The beauty of the Chernoff bound, like that of everything admirable, lies in its minimalism.
In its simplest form, it does this: give me a random variable taking values between -1 and 1, with mean 0 and variance \sigma^2. The Chernoff bound can tell you the maximum possible probability of it exceeding a number \alpha, say p.
If the probability were any higher than p, the variance would have to exceed the given \sigma^2. So this tail probability is intimately linked to \sigma^2.
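For the record, the standard statement of the bound (the exponential-moment form, for any s > 0) is

P(X \ge \alpha) \le \min_{s > 0} e^{-s\alpha} \, E[e^{sX}],

and for a variable bounded in [-1, 1] with mean 0 and variance \sigma^2, optimizing over s yields the Bernstein-type form

P(X \ge \alpha) \le \exp\left( -\frac{\alpha^2}{2(\sigma^2 + \alpha/3)} \right),

which makes the link between the tail probability and \sigma^2 explicit.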
Sketch of proof:
- Prove the bound for a single random variable.
- Extend the proof to a sum of two independent random variables.
- Invoke mathematical induction for the general sum.
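As a sanity check on the first step, here is a minimal numerical sketch (my own example, not from a reference): for X uniform on [-1, 1], the moment generating function is E[e^{sX}] = sinh(s)/s, so the Chernoff bound P(X \ge \alpha) \le min_{s>0} e^{-s\alpha} E[e^{sX}] can be minimized over a grid of s and compared against the empirical tail frequency.

```python
import math
import random

def chernoff_bound(alpha: float) -> float:
    """Minimize e^{-s*alpha} * E[e^{sX}] over a grid of s > 0,
    where X ~ Uniform[-1, 1] has MGF E[e^{sX}] = sinh(s)/s."""
    best = 1.0  # s -> 0 recovers the trivial bound of 1
    for i in range(1, 2001):
        s = i * 0.01
        mgf = math.sinh(s) / s
        best = min(best, math.exp(-s * alpha) * mgf)
    return best

def empirical_tail(alpha: float, n: int = 200_000, seed: int = 0) -> float:
    """Monte Carlo estimate of P(X >= alpha) for X ~ Uniform[-1, 1]."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if rng.uniform(-1.0, 1.0) >= alpha)
    return hits / n

alpha = 0.8
bound = chernoff_bound(alpha)
freq = empirical_tail(alpha)
# The exact tail here is (1 - alpha)/2 = 0.1; the bound must sit above it.
print(f"empirical P(X >= {alpha}) = {freq:.4f}, Chernoff bound = {bound:.4f}")
```

Note that the bound is loose for a single variable; the Chernoff technique pays off for sums of independent variables, where the MGF factors and the bound tightens exponentially in the number of terms.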