Probability

This chapter covers **probability**: conditional probability, the multiplication rule, independence of events, and Bayes' theorem, along with applications and historical context, focusing on discrete sample spaces and the binomial distribution.


13.1 Introduction

Probability is the study of uncertainty and quantifies the likelihood of various outcomes in random experiments. This chapter introduces the essential principles of probability, beginning with the axiomatic approach developed by Russian mathematician A. N. Kolmogorov, which allows us to calculate probabilities based on the outcomes of random experiments. We will explore:

  1. Conditional probability – the probability of an event occurring given that another event has already occurred.
  2. The multiplication rule and the addition rule of probability.
  3. The concept of independent events.
  4. The notion of a random variable and its associated probability distribution.
  5. The binomial distribution, a key discrete probability distribution.

13.2 Conditional Probability

Definition

Conditional probability refers to the probability of an event, E, occurring given that another event, F, has already occurred. It is mathematically expressed as:

Formula (1)

[ P(E|F) = \frac{P(E \cap F)}{P(F)}, \quad P(F) \neq 0 ]
where:

  • E is the event of interest.
  • F is the event conditioning the probability.
  • P(E ∩ F) is the probability of both events occurring.
  • P(F) is the probability of event F.

This definition shows how knowing that one event has occurred can change the likelihood of another. Once you know that F has occurred, the sample space effectively shrinks to the outcomes in F.
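The defining formula can be sketched as a small helper function; the input probabilities below are illustrative numbers, not taken from the chapter:

```python
from fractions import Fraction

def conditional_probability(p_e_and_f, p_f):
    """P(E|F) = P(E ∩ F) / P(F), defined only when P(F) is nonzero."""
    if p_f == 0:
        raise ValueError("P(F) must be nonzero")
    return Fraction(p_e_and_f) / Fraction(p_f)

# Illustrative values: P(E ∩ F) = 1/6 and P(F) = 1/2 give P(E|F) = 1/3
print(conditional_probability(Fraction(1, 6), Fraction(1, 2)))  # 1/3
```

Using `Fraction` keeps the arithmetic exact, matching the hand calculations in the examples.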

Properties of Conditional Probability

  1. Total sample space: P(S|F) = P(F|F) = 1. Given that F has occurred, both the sample space S and F itself are certain.
  2. Union of Events: For any two events A and B, given F with P(F) ≠ 0: [ P((A ∪ B)|F) = P(A|F) + P(B|F) - P((A ∩ B)|F) ]
  3. Complement of Events: [ P(E'|F) = 1 - P(E|F) ]
    This indicates that if event F occurs, the probability of the opposite of E (not E) can be derived from the probability of E.

Examples Demonstrating Conditional Probability:

  • Tossing coins, drawing cards, etc., to illustrate calculating probabilities based on conditional events.
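A coin-tossing instance of this idea can be worked out by direct enumeration. The sketch below (events chosen for illustration) tosses a fair coin three times, takes E = "exactly two heads" and F = "first toss is a head", and computes P(E|F) by restricting the sample space to F:

```python
from itertools import product
from fractions import Fraction

# Sample space of three fair coin tosses: 8 equally likely outcomes
space = list(product("HT", repeat=3))

E = {w for w in space if w.count("H") == 2}   # exactly two heads
F = {w for w in space if w[0] == "H"}         # first toss is a head

# Conditioning on F: count E's outcomes inside the reduced space F
p_e_given_f = Fraction(len(E & F), len(F))
print(p_e_given_f)  # 1/2
```

F contains {HHH, HHT, HTH, HTT}, of which HHT and HTH lie in E, so P(E|F) = 2/4 = 1/2.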

13.3 Multiplication Theorem on Probability

The multiplication rule looks at the probability of multiple events occurring:

  • For any two events E and F: [ P(E \cap F) = P(F) \times P(E|F) ]
  • This formula can be expanded for three or more events too.

Example:

Find the probability of drawing a king first and then a queen, without replacement, from a standard deck of 52 cards. By the multiplication rule, P(king then queen) = P(king) × P(queen | king) = (4/52) × (4/51) = 4/663.
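The king-then-queen calculation can be checked with exact arithmetic:

```python
from fractions import Fraction

p_king = Fraction(4, 52)               # P(F): first card is a king
p_queen_given_king = Fraction(4, 51)   # P(E|F): a queen from the remaining 51 cards
p_both = p_king * p_queen_given_king   # multiplication rule: P(E ∩ F) = P(F) P(E|F)
print(p_both)  # 4/663
```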

13.4 Independent Events

Events E and F are independent if the occurrence of one does not affect the probability of the other:

  • Mathematically expressed as: [ P(E \cap F) = P(E) \times P(F) ]
  • Important relationships include: [ P(E|F) = P(E) \quad P(F|E) = P(F) ]

Distinction:

Understanding the difference between mutually exclusive events and independent events is crucial. Mutually exclusive events can never happen simultaneously, whereas independent events can occur together without affecting each other's probability. In fact, two mutually exclusive events with nonzero probabilities are always dependent, since the occurrence of one rules out the other.
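Independence can be verified by checking P(E ∩ F) = P(E) P(F) directly. A classic single-die instance (events chosen here for illustration): E = "even number", F = "at most four":

```python
from fractions import Fraction

die = range(1, 7)
E = {n for n in die if n % 2 == 0}   # even number: {2, 4, 6}
F = {n for n in die if n <= 4}       # at most four: {1, 2, 3, 4}

def p(A):
    """Probability of an event on a fair six-sided die."""
    return Fraction(len(A), 6)

# P(E ∩ F) = 2/6 = 1/3 and P(E) P(F) = (1/2)(2/3) = 1/3, so E, F are independent
print(p(E & F) == p(E) * p(F))  # True
```

Note that E and F are independent yet not mutually exclusive: they share the outcomes 2 and 4.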

13.5 Bayes' Theorem

Bayes' theorem allows us to update the probability estimate for an event based on new evidence:

  • If {E1, E2, ..., En} is a partition of the sample space, then for any event A: [ P(E_i|A) = \frac{P(E_i)P(A|E_i)}{\sum_{j=1}^n P(E_j)P(A|E_j)} ]
    This theorem is notably useful in fields like medical testing and Bayesian statistics.

Applications

Bayes' theorem applies to various real-world situations, such as probability in disease prevalence and testing accuracy.
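A medical-testing sketch of Bayes' theorem, using the two-event partition {disease, no disease}. The prevalence, sensitivity, and false-positive figures below are illustrative assumptions, not values from the chapter:

```python
from fractions import Fraction

# Assumed figures for illustration: 1% prevalence, 99% sensitivity,
# 5% false-positive rate.
p_disease = Fraction(1, 100)
p_pos_given_disease = Fraction(99, 100)
p_pos_given_healthy = Fraction(5, 100)

# Denominator: total probability of a positive test over the partition,
# sum_j P(E_j) P(A|E_j)
p_pos = p_disease * p_pos_given_disease + (1 - p_disease) * p_pos_given_healthy

# Bayes' theorem: P(disease | positive) = P(disease) P(positive|disease) / P(positive)
p_disease_given_pos = p_disease * p_pos_given_disease / p_pos
print(p_disease_given_pos)  # 1/6
```

Even with a highly sensitive test, the low prevalence means a positive result implies only a 1-in-6 chance of disease, which is why Bayes' theorem matters in screening contexts.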

Conclusion

This chapter enriches your understanding of probability through concepts like conditional events, independence, and the crucial Bayes' theorem. The historical contributions to probability theory reinforce its development and application today. As you study these concepts, reflect on how they influence decision-making in uncertain scenarios.

Key terms/Concepts

  1. Conditional Probability: P(E|F) = P(E ∩ F) / P(F) when P(F) ≠ 0.
  2. Addition Rule: P(A ∪ B | F) = P(A|F) + P(B|F) - P(A ∩ B|F).
  3. Multiplication Theorem: P(E ∩ F) = P(E) P(F|E) = P(F) P(E|F).
  4. Independent Events: P(E ∩ F) = P(E) P(F) if E and F are independent.
  5. Bayes’ Theorem: P(E_i|A) = P(E_i)P(A|E_i) / Σ_j P(E_j)P(A|E_j).
  6. Discrete Probability: Understand foundational distributions like binomial distribution.
  7. Historical Contributions: Key figures include Pascal, Fermat, and Kolmogorov.
  8. Applications: Use Bayes' Theorem in real-world contexts like medical testing and decision analysis.
  9. Real-valued Functions: Random variables quantify outcomes in probability.
  10. Partition of Sample Space: the events must be pairwise disjoint, exhaustive, and each of nonzero probability.
