The mathematical language of uncertainty. Distributions, hypothesis testing, Bayesian inference. Every machine learning model speaks this language — this module teaches you to hear it.
Linear algebra gives AI its structure. Calculus gives it its learning algorithm. Statistics gives it its epistemology — its theory of knowledge under uncertainty. A neural network's output is a probability distribution. A model's confidence is a statistical statement. Without statistics, you can run AI. With statistics, you can understand what it's telling you.
A probability distribution is a complete description of uncertainty about a quantity. The normal distribution assumes symmetric randomness around a mean. The Poisson assumes counts of rare, independent events. The Bernoulli assumes a single binary outcome. Choosing the right distribution is choosing the right model of reality. Machine learning automates this choice — but understanding the options is understanding the assumptions your model is making.
Bayes' theorem is the engine of probabilistic reasoning. You start with a prior belief. You observe evidence. You update your belief. Machine learning follows the same loop: start with initial weights (a prior), observe training data (evidence), update the weights (toward a posterior). For some models the correspondence is exact; for the rest, Bayesian thinking is more than a metaphor. It is the framework in which learning is belief revision under evidence.
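The prior-evidence-posterior loop fits in a few lines. A minimal sketch, with a made-up scenario (inferring a coin's bias from flips, with three candidate hypotheses) chosen purely for illustration:

```python
# Bayes' theorem on a small grid of hypotheses:
# posterior is proportional to likelihood times prior.

hypotheses = [0.25, 0.5, 0.75]           # candidate values of P(heads)
prior = {h: 1 / 3 for h in hypotheses}   # start uncommitted: a uniform prior

def update(belief, heads):
    # One Bayesian update step for a single coin flip.
    likelihood = {h: h if heads else 1 - h for h in belief}
    unnorm = {h: likelihood[h] * belief[h] for h in belief}
    total = sum(unnorm.values())
    return {h: v / total for h, v in unnorm.items()}

belief = prior
for flip in [1, 1, 1, 0, 1]:             # observe: heads, heads, heads, tails, heads
    belief = update(belief, flip)

for h, p in belief.items():
    print(f"P(bias={h}) = {p:.3f}")
```

After four heads and one tail, the belief has shifted toward the heads-biased hypothesis: the same prior-to-posterior motion the paragraph describes, just made explicit.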
Statistics & Probability follows the Linear Algebra module. Interactive visualisations for distributions, hypothesis testing, and Bayesian updating are being designed.