
Bernoulli Trials and Binomial Distribution

Edited By Komal Miglani | Updated on Jul 02, 2025 07:51 PM IST

Probability is the branch of mathematics that deals with the likelihood of different outcomes; it plays an important role in estimating or predicting the chances of an event. Bernoulli's equation is a special type of first-order non-linear differential equation that arises in various fields such as physics and engineering, with major applications in fluid dynamics, biology, and economics.


Bernoulli Equation

An equation of the form

$
\frac{d y}{d x}+P(x) \cdot y=Q(x) \cdot y^n
$


where $P(x)$ and $Q(x)$ are functions of $x$ only, is known as Bernoulli's equation.
(If we put $n=0$, the equation is already in linear form.)
To reduce this equation to linear form, divide both sides by $y^n$:

$
\begin{aligned}
& \quad \frac{1}{y^n} \cdot \frac{d y}{d x}+\frac{P(x)}{y^{n-1}}=Q(x) \\
& \text { or } \quad y^{-n} \cdot \frac{d y}{d x}+y^{1-n} \cdot P(x)=Q(x)
\end{aligned}
$


Putting $y^{1-n}=t$ converts this equation into a linear equation in $t$, which can then be solved.
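Explicitly, differentiating the substitution gives $\frac{d t}{d x}=(1-n) y^{-n} \frac{d y}{d x}$, so the reduced equation becomes

$
\frac{d t}{d x}+(1-n) P(x) \cdot t=(1-n) Q(x)
$

which is a linear differential equation in $t$ and can be solved using an integrating factor.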

Bernoulli Trials

Trials of a random experiment are called Bernoulli trials if they satisfy the following conditions:

  1. There are a fixed number of trials. Think of trials as repetitions of an experiment. The letter $n$ denotes the number of trials.

  2. The $n$ trials are independent and are repeated using identical conditions.

  3. There are only two possible outcomes, called "success" and "failure," for each trial. The letter $p$ denotes the probability of a success on any one trial, and $q$ denotes the probability of a failure on any one trial, with $p + q = 1$.

For example, randomly guessing at a true-false statistics question has only two outcomes. If a success is guessing correctly, then a failure is guessing incorrectly. Suppose Joe always guesses correctly on any statistics true-false question with probability $p = 0.6$; then $q = 0.4$. This means that for every true-false statistics question Joe answers, his probability of success $(p = 0.6)$ and his probability of failure $(q = 0.4)$ remain the same. So guessing one question is considered a trial. If he guesses $n$ different questions, the trial is repeated $n$ times and $p = 0.6$ remains the same for each trial.

Binomial Distribution

A binomial distribution with $n$ Bernoulli trials and probability of success $p$ in each trial is denoted by $B(n, p)$. The probability of exactly $r$ successes is

$
P(X=r)={ }^n C_r p^r \cdot q^{n-r}
$

Where $P(X=r)$ is the probability of exactly $r$ successes in $n$ trials when the probability of success in any one trial is $p$, and $q=1-p$ is the probability of a failure in any one trial.
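For instance, with Joe's true-false questions from the previous section ($p=0.6$, $q=0.4$), the probability of guessing exactly 2 out of $n=3$ questions correctly is

$
P(X=2)={ }^3 C_2(0.6)^2(0.4)^1=3 \times 0.36 \times 0.4=0.432
$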

In the experiment, the probability of
- At least $r$ successes,

$
P(X \geq r)=\sum_{\lambda=r}^n{ }^n C_\lambda p^\lambda \cdot q^{n-\lambda}
$

- At most $r$ successes,

$
P(X \leq r)=\sum_{\lambda=0}^{r}{ }^n C_\lambda p^\lambda \cdot q^{n-\lambda}
$

The mean, $\mu$, and variance, $\sigma^2$, for the binomial probability distribution are $\mu=\mathrm{np} \quad$ and $\quad \sigma^2=\mathrm{npq}$
The standard deviation, $\sigma$, is then $\sigma=\sqrt{n p q}$.
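As a quick numerical illustration of these formulas (the values $n=10$, $p=0.6$, $r=4$ below are chosen purely for the example), a short Python sketch:

```python
from math import comb, sqrt

def binomial_pmf(n, p, r):
    """P(X = r) = nCr * p^r * q^(n - r) for X ~ B(n, p)."""
    q = 1 - p
    return comb(n, r) * p**r * q**(n - r)

n, p, r = 10, 0.6, 4  # illustrative values only

p_exactly  = binomial_pmf(n, p, r)                                # P(X = r)
p_at_least = sum(binomial_pmf(n, p, k) for k in range(r, n + 1))  # P(X >= r)
p_at_most  = sum(binomial_pmf(n, p, k) for k in range(0, r + 1))  # P(X <= r)

mean     = n * p             # mu = np
variance = n * p * (1 - p)   # sigma^2 = npq
std_dev  = sqrt(variance)    # sigma = sqrt(npq)

print(p_exactly, p_at_least, p_at_most, mean, variance, std_dev)
```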

Solved Examples Based on Bernoulli Trials and Binomial Distribution

Example 1: The probability that a student is not a swimmer is $\frac{1}{5}$. The probability that out of 5 students exactly 4 are swimmers is
1) $\left(\frac{4}{5}\right)^3$
2) $\left(\frac{4}{5}\right)^4$
3) ${ }^5 \mathrm{C}_4\left(\frac{4}{5}\right)^4$
4) none of these

Solution
Binomial Theorem on Probability -
If an experiment is repeated $n$ times under similar conditions, we say that $n$ trials of the experiment have been made.
Let $E$ be an event,
$p=$ the probability of occurrence of event $E$ in one trial,
$q=1-p=$ the probability of non-occurrence of event $E$ in one trial, so that $p+q=1$,
$X=$ the number of successes.

Let $p=$ probability that a student selected at random is a swimmer $=1-\frac{1}{5}=\frac{4}{5}$, so $q=\frac{1}{5}$.

The probability that exactly 4 students are swimmers is

$
{ }^5 C_4 p^4 q=5\left(\frac{4}{5}\right)^4 \frac{1}{5}=\left(\frac{4}{5}\right)^4
$

Hence, the answer is the option (2).

Example 2: If the probability of hitting a target by a shooter, in any shot, is $\frac{1}{3}$, then the minimum number of independent shots at the target required by him so that the probability of hitting the target at least once is greater than $\frac{5}{6}$, is:
1) 6
2) 5
3) 4
4) 3

Solution
Binomial Theorem on Probability -
If an experiment is repeated $n$ times under similar conditions, with probability of success $p$ and probability of failure $q=1-p$ in each trial, then the probability of exactly $r$ successes is

$
P(X=r)={ }^n C_r \cdot p^r \cdot q^{n-r}
$

Let $n$ be the required number of shots. The probability of hitting the target at least once must satisfy

$
\begin{aligned}
& 1-{ }^n C_0\left(\frac{1}{3}\right)^0\left(\frac{2}{3}\right)^n>\frac{5}{6} \\
& \Rightarrow\left(\frac{2}{3}\right)^n<\frac{1}{6}
\end{aligned}
$

Since $\left(\frac{2}{3}\right)^4=\frac{16}{81} \approx 0.198>\frac{1}{6}$ and $\left(\frac{2}{3}\right)^5=\frac{32}{243} \approx 0.132<\frac{1}{6}$, we get $n_{\min }=5$.

Hence, the answer is the option (2).
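The same minimum can also be checked numerically; this short sketch simply scans successive values of $n$ until the condition is met:

```python
# Smallest n with P(at least one hit) = 1 - (2/3)**n > 5/6
p_miss = 2 / 3
n = 1
while 1 - p_miss**n <= 5 / 6:
    n += 1
print(n)  # prints 5
```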

Example 3: One hundred identical coins each with probability $p$ of showing heads are tossed once. If $0<p<1$ and the probability of heads showing on 50 coins is equal to that of heads showing on 51 coins, the value of $p$ is
1) $\frac{1}{2}$
2) $\frac{51}{101}$
3) $\frac{49}{101}$
4) none of these

Solution

By the binomial theorem on probability,

$
P(X=r)={ }^n C_r \cdot p^r \cdot q^{n-r}
$

Let $X$ be the number of coins showing heads.
Then $X$ follows a binomial distribution with parameters $n=100$ and $p$.

$
\begin{aligned}
& \text { Since } P(X=50)=P(X=51) \\
& \Rightarrow{ }^{100} C_{50}\, p^{50}(1-p)^{50}={ }^{100} C_{51}\, p^{51}(1-p)^{49} \\
& \Rightarrow \frac{100!}{50!\,50!} \cdot \frac{51!\,49!}{100!}=\frac{p}{1-p} \\
& \Rightarrow \frac{51}{50}=\frac{p}{1-p} \\
& \Rightarrow 51-51 p=50 p \\
& \Rightarrow p=\frac{51}{101}
\end{aligned}
$

Hence, the answer is the option (2).

Example 4: A South African cricket captain lost the toss of a coin 13 times out of 14. The chance of this happening was:
1) $\frac{7}{2^{13}}$
2) $\frac{1}{2^{13}}$
3) $\frac{13}{2^{14}}$
4) $\frac{13}{2^{13}}$

Solution
As we have learned from the binomial theorem on probability,

$
P(X=r)={ }^n C_r \cdot p^r \cdot q^{n-r}
$


$
P={ }^{14} C_{13}\left(\frac{1}{2}\right)^{13}\left(\frac{1}{2}\right)^{14-13}=14 \times \frac{1}{2^{13}} \frac{1}{2}=\frac{7}{2^{13}}
$

Hence, the answer is the option (1).

Example 5: For the initial screening of an admission test, a candidate is given fifty problems to solve. If the probability that the candidate can solve any problem is $\frac{4}{5}$, then the probability that he is unable to solve less than two problems is:
1) $\frac{201}{5}\left(\frac{1}{5}\right)^{49}$
2) $\frac{316}{25}\left(\frac{4}{5}\right)^{48}$
3) $\frac{54}{5}\left(\frac{4}{5}\right)^{49}$
4) $\frac{164}{25}\left(\frac{1}{5}\right)^{48}$

Solution
By the binomial theorem on probability,

$
P(X=r)={ }^n C_r \cdot p^r \cdot q^{n-r}
$

The probability of solving any one problem is $\frac{4}{5}$.

$
\Rightarrow P(\text { not solving })=\frac{1}{5}
$

Let $X$ be the number of problems he is unable to solve. The probability that he is unable to solve less than two problems is
$P(X=0)+P(X=1)$

$
=\left(\frac{4}{5}\right)^{50}+{ }^{50} C_1\left(\frac{4}{5}\right)^{49}\left(\frac{1}{5}\right)^1
$

(using the binomial probability formula)

$
=\left(\frac{4}{5}\right)^{49}\left(\frac{4}{5}+10\right)=\frac{54}{5}\left(\frac{4}{5}\right)^{49}
$

Hence, the answer is option (3).
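A quick numerical check, using the same numbers as in this example, confirms the simplification:

```python
from math import comb, isclose

p_solve = 4 / 5
direct = p_solve**50 + comb(50, 1) * p_solve**49 * (1 / 5)  # P(X = 0) + P(X = 1)
closed_form = (54 / 5) * p_solve**49                        # (54/5) * (4/5)^49
print(isclose(direct, closed_form))  # True
```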

Frequently Asked Questions (FAQs)

1. What is a probability?

Probability is a part of mathematics that deals with the likelihood of different outcomes.

2. Give the Bernoulli equation.

An equation of the form

$
\frac{d y}{d x}+P(x) \cdot y=Q(x) \cdot y^n
$

Where $P(x)$ and $Q(x)$ are a function of $x$ only, is known as Bernoulli's equation.

3. What is Bernoulli Trial?

Trials of a random experiment are called Bernoulli trials if they satisfy the following conditions:

  1. There are a fixed number of trials. Think of trials as repetitions of an experiment. The letter $n$ denotes the number of trials.

  2. The $n$ trials are independent and are repeated using identical conditions. 

  3. There are only two possible outcomes, called "success" and "failure," for each trial. The letter $p$ denotes the probability of a success on any one trial, and $q$ denotes the probability of a failure on any one trial, with $p + q = 1$.

4. What is the binomial probability distribution?

A binomial distribution with $n$ Bernoulli trials and probability of success $p$ in each trial is denoted by $B(n, p)$. The probability of exactly $r$ successes is

$
P(X=r)={ }^n C_r p^r \cdot q^{n-r}
$

Where $P(X=r)$ is the probability of exactly $r$ successes in $n$ trials when the probability of success in any one trial is $p$, and $q=1-p$ is the probability of a failure in any one trial.

5. Give the mean, variance and the standard deviation for the binomial distribution.

The mean, $\mu$, and variance, $\sigma^2$, for the binomial probability distribution are $\mu=\mathrm{np} \quad$ and $\quad \sigma^2=\mathrm{npq}$
The standard deviation, $\sigma$, is then $\sigma=\sqrt{n p q}$.

6. What is a Bernoulli trial?
A Bernoulli trial is a random experiment with exactly two possible outcomes: success or failure. Each trial is independent, and the probability of success remains constant for all trials. Examples include flipping a coin, where heads might be considered success and tails failure.
7. How does a Bernoulli trial relate to the binomial distribution?
The binomial distribution describes the probability of achieving a specific number of successes in a fixed number of independent Bernoulli trials. It's essentially a way to model multiple Bernoulli trials performed under the same conditions.
8. What are the key parameters of a binomial distribution?
The binomial distribution has two key parameters: n (the number of trials) and p (the probability of success on each trial). These parameters fully define the shape and characteristics of the distribution.
9. Why is the binomial distribution called "binomial"?
The term "binomial" refers to the fact that each trial has two possible outcomes (success or failure). The probabilities for these outcomes can be represented by the terms of a binomial expansion (p + q)^n, where p is the probability of success, q is the probability of failure (1-p), and n is the number of trials.
10. How do you calculate the probability of exactly k successes in n trials?
The probability of exactly k successes in n trials is calculated using the binomial probability formula: P(X = k) = C(n,k) * p^k * (1-p)^(n-k), where C(n,k) is the number of ways to choose k items from n items, p is the probability of success on each trial, and 1-p is the probability of failure.
11. What is the difference between a Bernoulli distribution and a binomial distribution?
A Bernoulli distribution describes the outcome of a single trial with two possible outcomes, while a binomial distribution describes the number of successes in a fixed number of independent Bernoulli trials. The Bernoulli distribution is essentially a special case of the binomial distribution where n=1.
12. Can you have a fractional number of successes in a binomial distribution?
No, the number of successes in a binomial distribution must be a whole number. The binomial distribution is a discrete probability distribution, meaning it only deals with integer values of successes.
13. What is the expected value (mean) of a binomial distribution?
The expected value or mean of a binomial distribution is np, where n is the number of trials and p is the probability of success on each trial. This represents the average number of successes you would expect over many repetitions of the experiment.
14. How do you calculate the variance of a binomial distribution?
The variance of a binomial distribution is given by np(1-p), where n is the number of trials and p is the probability of success on each trial. This measures the spread or dispersion of the distribution around its mean.
15. What is the standard deviation of a binomial distribution?
The standard deviation of a binomial distribution is the square root of its variance. It's calculated as √(np(1-p)), where n is the number of trials and p is the probability of success on each trial.
16. How does changing the probability of success (p) affect the shape of the binomial distribution?
As p approaches 0.5, the distribution becomes more symmetric. When p is close to 0 or 1, the distribution becomes more skewed. For p = 0.5, the distribution is perfectly symmetric.
17. What happens to the binomial distribution as the number of trials (n) increases?
As n increases, the binomial distribution tends to approximate a normal distribution, especially when np and n(1-p) are both greater than 5. This is known as the normal approximation to the binomial distribution.
18. Can the binomial distribution be used for dependent events?
No, the binomial distribution assumes that all trials are independent. If the events are dependent (the outcome of one trial affects the probability of success in subsequent trials), the binomial distribution is not appropriate.
19. What is the difference between "with replacement" and "without replacement" in the context of Bernoulli trials?
"With replacement" means that after each trial, the item is put back before the next trial, keeping the probability of success constant. "Without replacement" means items are not replaced, changing the probability of success for subsequent trials. The binomial distribution assumes trials are with replacement or from an infinite population.
20. How does the binomial distribution relate to the concept of sampling?
The binomial distribution can model sampling from a large population when we're interested in the presence or absence of a particular characteristic. Each "trial" represents selecting an individual from the population, and "success" means the individual has the characteristic of interest.
21. What is the relationship between the binomial coefficient and the binomial distribution?
The binomial coefficient, often denoted as C(n,k) or (n choose k), represents the number of ways to choose k successes from n trials. It's a crucial part of the binomial probability formula, accounting for all the possible ways to achieve k successes in n trials.
22. How do you determine if a scenario follows a binomial distribution?
A scenario follows a binomial distribution if it meets these criteria: 1) There's a fixed number of trials, 2) Each trial is independent, 3) There are only two possible outcomes per trial, 4) The probability of success remains constant for all trials.
23. What is the cumulative binomial distribution?
The cumulative binomial distribution gives the probability of obtaining up to and including a certain number of successes in n trials. It's calculated by summing the probabilities of all outcomes from 0 successes up to the specified number of successes.
24. How does the binomial distribution relate to coin flipping?
Coin flipping is a classic example of a binomial distribution. Each flip is a Bernoulli trial with two outcomes (heads or tails), typically with p=0.5. The number of heads in a series of flips follows a binomial distribution.
25. Can the binomial distribution have a probability of success greater than 1 or less than 0?
No, the probability of success (p) in a binomial distribution must always be between 0 and 1, inclusive. This is because p represents a probability, which is always in this range.
26. What is the mode of a binomial distribution?
The mode of a binomial distribution is the most likely number of successes. It's given by floor((n+1)p), where n is the number of trials and p is the probability of success. If (n+1)p is an integer, there are two modes: (n+1)p and (n+1)p - 1.
27. How does the binomial distribution relate to the law of large numbers?
The law of large numbers states that as the number of trials increases, the sample mean approaches the expected value. For a binomial distribution, this means that as n increases, the proportion of successes tends to get closer to p.
28. What is the difference between a binomial experiment and a binomial random variable?
A binomial experiment is the process of conducting n independent Bernoulli trials. A binomial random variable is the number of successes observed in this experiment. The binomial distribution describes the probability distribution of this random variable.
29. How can you use the binomial distribution to calculate probabilities for "at least" or "at most" scenarios?
For "at least" k successes, sum the probabilities from k to n. For "at most" k successes, sum the probabilities from 0 to k. Alternatively, for "at least" scenarios, you can subtract the cumulative probability of k-1 successes from 1.
30. What is the relationship between the mean and variance of a binomial distribution?
The mean of a binomial distribution (np) is always greater than or equal to its variance (np(1-p)), since 1-p ≤ 1. They are equal only when p = 0, in which case both are zero. The variance reaches its maximum when p = 0.5, at which point it equals n/4.
31. How does the binomial distribution relate to the Poisson distribution?
The Poisson distribution can be used as an approximation to the binomial distribution when n is large and p is small. Specifically, when n ≥ 20 and np ≤ 7, the Poisson distribution with λ = np provides a good approximation.
32. What is the skewness of a binomial distribution?
The skewness of a binomial distribution is given by (1-2p) / √(np(1-p)). When p = 0.5, the distribution is symmetric and has zero skewness. For p < 0.5, the distribution is positively skewed, and for p > 0.5, it's negatively skewed.
33. How does the concept of "trials to first success" relate to the binomial distribution?
While the binomial distribution models the number of successes in a fixed number of trials, the number of trials until the first success follows a different distribution called the geometric distribution. However, both are based on Bernoulli trials.
34. Can a binomial distribution ever be uniform?
A binomial distribution is uniform only in the trivial case n = 1 with p = 0.5, where both outcomes are equally likely. For n > 1 it is never uniform, because the probabilities of the extreme values (0 and n successes) are always lower than those of values near the mean.
35. How does the central limit theorem apply to the binomial distribution?
The central limit theorem states that for large n, the distribution of the sample mean approaches a normal distribution. For a binomial distribution, this means that as n increases, the distribution of (X - np) / √(np(1-p)) approaches a standard normal distribution.
36. What is the moment generating function of a binomial distribution?
The moment generating function of a binomial distribution is M(t) = (1-p+pe^t)^n, where p is the probability of success, n is the number of trials, and e is the base of natural logarithms. This function can be used to derive various properties of the distribution.
37. How does the binomial distribution relate to hypothesis testing?
The binomial distribution is often used in hypothesis testing, particularly for testing proportions. For example, it can be used to test whether an observed proportion of successes in a sample is significantly different from an expected proportion in the population.
38. What is the difference between a binomial probability and a binomial likelihood?
A binomial probability calculates the chance of a specific outcome given known parameters (n and p). A binomial likelihood, used in statistical inference, treats the observed data as fixed and considers how likely different parameter values are, given that data.
39. How does the concept of "success" and "failure" in Bernoulli trials relate to real-world scenarios?
In Bernoulli trials, "success" and "failure" are arbitrary labels for the two possible outcomes. In real-world scenarios, these could represent any binary outcome: yes/no, defective/non-defective, infected/not infected, etc. The key is that there are only two mutually exclusive possibilities.
40. Can you have a binomial distribution with a non-integer number of trials?
No, the number of trials (n) in a binomial distribution must be a positive integer. If you need to model a non-integer number of trials, you might need to consider other distributions or modify your approach.
41. How does the binomial distribution relate to the concept of statistical power?
Statistical power in hypothesis testing is the probability of correctly rejecting a false null hypothesis. For tests involving proportions, the binomial distribution is used to calculate this power, considering the sample size (n), the true proportion (p), and the effect size you want to detect.
42. What is the relationship between the binomial distribution and the beta distribution?
The beta distribution is the conjugate prior for the binomial distribution in Bayesian statistics. This means that if you start with a beta prior for the probability of success and observe binomial data, your posterior distribution for the probability of success will also be a beta distribution.
43. How can you use the binomial distribution to model rare events?
For rare events where p is very small and n is large, the binomial distribution can be approximated by the Poisson distribution. This is useful in modeling events like radioactive decay, traffic accidents, or defects in manufacturing, where the probability of occurrence for any single item or time period is low.
44. What is the difference between a binomial process and a Bernoulli process?
A Bernoulli process is a sequence of independent Bernoulli trials, potentially infinite in number. A binomial process specifically refers to a fixed number (n) of Bernoulli trials. The binomial distribution describes the outcomes of a binomial process.
45. How does the concept of "trials until r successes" relate to the binomial distribution?
While the binomial distribution models a fixed number of trials, the number of trials until r successes follows a different distribution called the negative binomial distribution. Both are based on Bernoulli trials, but they answer different questions about the process.
46. Can you have a binomial distribution with p = 0 or p = 1?
Yes, you can technically have a binomial distribution with p = 0 or p = 1, but these are degenerate cases. If p = 0, you'll always get 0 successes, and if p = 1, you'll always get n successes. In practice, these cases are usually not very interesting or useful.
47. How does the binomial distribution relate to the concept of expected value in probability theory?
The expected value of a binomial distribution (np) is a key concept in probability theory. It represents the long-term average outcome if the experiment were repeated many times. This concept extends to more complex scenarios and forms the basis for many statistical techniques.
48. What is the relationship between the binomial distribution and the normal distribution?
As the number of trials (n) increases, the binomial distribution approaches a normal distribution. This is known as the normal approximation to the binomial and is generally considered good when np > 5 and n(1-p) > 5. This relationship is a specific case of the central limit theorem.
49. How does the concept of independence in Bernoulli trials relate to covariance?
Independence in Bernoulli trials means that the outcome of one trial does not affect the outcomes of other trials. In terms of covariance, this means that the covariance between any two trials in a binomial experiment is zero. This property is crucial for the validity of the binomial distribution.
50. What is the probability generating function of a binomial distribution?
The probability generating function of a binomial distribution is G(z) = (1-p+pz)^n, where p is the probability of success, n is the number of trials, and z is a complex variable. This function can be used to derive various properties of the distribution, similar to the moment generating function.
51. How does the binomial distribution relate to the concept of information entropy?
The information entropy of a binomial distribution measures the average amount of information contained in each trial. It's maximized when p = 0.5, reflecting maximum uncertainty about the outcome of each trial. This concept connects probability theory with information theory.
52. What is the relationship between the binomial distribution and the hypergeometric distribution?
Both distributions model the number of successes in a series of trials, but the hypergeometric distribution is used for sampling without replacement from a finite population. The binomial distribution can be seen as a limiting case of the hypergeometric distribution as the population size becomes very large.
53. How can the binomial distribution be used in quality control?
In quality control, the binomial distribution can model the number of defective items in a sample. For example, if each item has a probability p of being defective, the number of defective items in a sample of n items follows a binomial distribution. This can be used to set acceptance criteria for batches of products.
54. What is the characteristic function of a binomial distribution?
The characteristic function of a binomial distribution is φ(t) = (1-p+pe^(it))^n, where i is the imaginary unit, t is a real number, p is the probability of success, and n is the number of trials. This function is closely related to the moment generating function and can be used to derive distribution properties.
55. How does the concept of sufficient statistics relate to the binomial distribution?
For a binomial distribution, the number of successes (k) is a sufficient statistic for the parameter p. This means that k contains all the information in the sample that is relevant for estimating p. No other aspect of the data provides additional information about p once k is known. This concept is fundamental in statistical inference.
