A brief introduction to the probabilistic method, followed by some basic bounds and a few examples.
Alon, Spencer, and Erdős (1992) describe the method as follows:
In order to prove the existence of a combinatorial structure with certain properties, we construct an appropriate probability space and show that a randomly chosen element in the space has the desired properties with positive probability.
The method has come into wide use since about 1950, even in areas of mathematics that, a priori, have nothing to do with probability. This is roughly because probability provides a useful high-level conceptual framework for understanding (formalizing, organizing, communicating) many widely applicable counting and averaging arguments. This probabilistic viewpoint is called the probabilistic lens.
Alon et al. note that, although in principle analysis with the probabilistic method can be replaced by direct arguments,
…in practice, the probability is essential. It would be hopeless to replace the applications of many of the tools [by counting arguments]. [1, page 2]
In Computer Science, the method is used in randomized rounding, to design approximation algorithms for combinatorial optimization problems.
We assume familiarity with the basic concepts of probability: random experiments, random variables, and events. (Reference materials on these topics are listed at the end, below.)
Notes on the probabilistic method
Existence proofs, the naive union bound, linearity of expectation, Markov bound.
Examples: a 2-approximation algorithm for Max Cut; a proof of Turán’s theorem.
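As a small illustration of the Max Cut example (a minimal sketch; the function name is ours): assign each vertex independently and uniformly at random to one of two sides. By linearity of expectation, each edge is cut with probability 1/2, so the expected cut size is |E|/2 — hence a cut of at least half the edges exists.

```python
import random

def random_cut(n, edges, rng):
    """Assign each of the n vertices to side 0 or 1 uniformly at random
    and return the number of cut edges."""
    side = [rng.randrange(2) for _ in range(n)]
    # An edge is cut exactly when its endpoints land on different sides.
    return sum(1 for u, v in edges if side[u] != side[v])

# Triangle on 3 vertices: each of the 3 edges is cut with probability 1/2,
# so the expected cut size is 3/2 (a random cut of a triangle is 0 or 2).
edges = [(0, 1), (1, 2), (0, 2)]
avg = sum(random_cut(3, edges, random.Random(s)) for s in range(1000)) / 1000
```

Averaging over many seeds, `avg` concentrates near the expectation 3/2, matching the linearity-of-expectation calculation.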
The method of conditional probabilities is a systematic way to convert a non-constructive probabilistic existence proof into an efficient deterministic algorithm that explicitly constructs the desired object.
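To make this concrete, here is a minimal sketch (names are ours, not from the notes) of the method applied to Max Cut: fix the vertices one at a time, always choosing the side that does not decrease the conditional expected cut size. Edges with both endpoints placed contribute 0 or 1; every edge with an unplaced endpoint still contributes 1/2 in expectation, so the rule reduces to placing each vertex opposite the majority of its already-placed neighbors, which guarantees a final cut of size at least |E|/2.

```python
def derandomized_cut(n, edges):
    """Method of conditional probabilities for Max Cut: place vertices
    one at a time on the side that keeps the conditional expected cut
    size from decreasing. This reduces to a greedy majority rule and
    returns a cut of size at least len(edges) / 2."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    side = [None] * n
    for v in range(n):
        placed = [side[w] for w in adj[v] if side[w] is not None]
        # Disagreeing with the majority of placed neighbors maximizes the
        # conditional expected cut over the two choices for vertex v.
        side[v] = 0 if placed.count(1) >= placed.count(0) else 1
    return sum(1 for u, v in edges if side[u] != side[v])
```

Each step can only keep or raise the conditional expectation, which starts at |E|/2, so the deterministic output matches the probabilistic guarantee.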
In applying the method of conditional probabilities, exact conditional probabilities (or expectations) are sometimes hard to compute. Pessimistic estimators can be used instead. We illustrate the idea by example, using Turán’s theorem.
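As a preview of the Turán example: the probabilistic proof shows that a random vertex ordering yields an independent set of expected size at least the sum of 1/(d(v)+1) over all vertices v, and it derandomizes to a simple greedy rule. A minimal sketch (the function name is ours):

```python
def turan_independent_set(n, edges):
    """Greedy rule behind the Turán bound: repeatedly pick a
    minimum-degree vertex, add it to the independent set, then delete it
    together with all of its neighbors. The resulting independent set
    has size at least sum(1 / (d(v) + 1) for all vertices v)."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    indep = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))  # minimum remaining degree
        indep.append(v)
        # Remove v and its neighbors from the remaining graph.
        for w in adj[v] | {v}:
            for x in adj.pop(w):
                if x in adj:
                    adj[x].discard(w)
    return indep
```

On a triangle (all degrees 2) this returns a single vertex, matching the bound 3 · 1/3 = 1; on a path 0–1–2 it returns the two endpoints.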
Pessimistic estimators in the method of conditional probabilities. A brief introduction to Chernoff bounds.
If you’re already familiar with Chernoff bounds, you may prefer to skip directly to the statement and proof of a typical Chernoff bound.
Chernoff bounds (a.k.a. tail bounds, Hoeffding/Azuma/Talagrand inequalities, the method of bounded differences, etc. [1, 2]) are used to bound the probability that some function (typically a sum) of many “small” random variables falls in the tail of its distribution (far from its expectation).
The statement and proof of a typical Chernoff bound.
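For orientation, one common form of such a bound (for sums of independent 0/1 random variables; the constants vary across references):

```latex
% A typical Chernoff bound. Let X = X_1 + \dots + X_n be a sum of
% independent random variables with X_i \in \{0,1\}, and let \mu = E[X].
% Then for any 0 < \varepsilon \le 1:
\Pr[X \ge (1+\varepsilon)\mu] \le e^{-\varepsilon^2 \mu / 3},
\qquad
\Pr[X \le (1-\varepsilon)\mu] \le e^{-\varepsilon^2 \mu / 2}.
```

Both tails decay exponentially in the expectation, which is what makes these bounds so much stronger than the Markov bound for sums of many small independent variables.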
Reference materials on basic probability, available online
Notes on Discrete Probability — by Trevisan
Introduction to Probability — by Grinstead and Snell (see Chapters 1, 4, 6)
Lecture Notes on Probability Theory and Random Processes — by Walrand (see Chapters 1–6)