Core Question: How can we train a flexible mathematical function to approximate expensive computations — and when should we trust its predictions?
The Big Picture: You've built N-body simulations in Project 2, and you're now rebuilding that simulator in JAX for the final project. You've also built Bayesian inference engines in Project 4. Now you'll learn to build fast approximations to expensive computations, which is the key to the final project: training a neural network to predict N-body simulation outcomes, then using that emulator for inference.
Section 1.1: The Emulation Problem. Why we need fast surrogates for expensive simulations; the scientific case.
Section 1.2: From Linear Regression to Neural Networks. The intellectual progression: linear models \(\to\) feature engineering \(\to\) learned features.
Section 1.3: The Computational Neuron. The mathematical building blocks: neurons, activations, and why nonlinearity matters.
Section 1.4: Layering Neurons into Networks. Architecture design: hidden layers, parameter counting, and the expressiveness tradeoff.
Section 1.5: The Universal Approximation Theorem. Why neural networks can (in principle) approximate any function; the profound idea and its limits.
Section 1.6: Forward Propagation. How input becomes output, from the computational-graph perspective you know from the JAX module.
Section 1.7: Training as Optimization. Loss functions, gradient descent, and the connection to likelihood maximization.
Section 1.8: Weight Initialization — Breaking Symmetry. Why random initialization matters and how it enables ensembles.
Section 1.9: Practical Training. Normalization, learning rates, convergence, multi-output handling, and debugging.
Section 1.10: Uncertainty via Ensembles. Why single networks are dangerous, and how ensembles quantify epistemic uncertainty.
Section 1.11: The JAX Ecosystem — Equinox and Optax. Professional tools for building and training neural networks.
Section 1.12: From Emulator to Inference. Connecting your trained network to NumPyro for Bayesian parameter recovery.
Section 1.13: Synthesis. What you've learned and how it all connects.
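As a preview of the pieces the outline develops (neurons and activations in 1.3, layering in 1.4, forward propagation in 1.6, random initialization in 1.8), here is a minimal sketch of a small multilayer perceptron in plain JAX. The function names, layer widths, and tanh activation are illustrative choices, not the module's implementation; later sections build the same ideas with Equinox and Optax.

```python
import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    """Create weight/bias pairs for each layer. `sizes` lists layer
    widths, e.g. [2, 16, 1] for 2 inputs, 16 hidden units, 1 output."""
    params = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        key, wkey = jax.random.split(key)
        # Small random weights break the symmetry between neurons
        # (the point of Section 1.8); biases can start at zero.
        W = 0.1 * jax.random.normal(wkey, (n_in, n_out))
        b = jnp.zeros(n_out)
        params.append((W, b))
    return params

def forward(params, x):
    """Forward propagation (Section 1.6): alternate affine maps with a
    nonlinearity, ending in a linear layer for regression outputs."""
    for W, b in params[:-1]:
        x = jnp.tanh(x @ W + b)   # nonlinearity is what makes depth useful
    W, b = params[-1]
    return x @ W + b

params = init_mlp(jax.random.PRNGKey(0), [2, 16, 1])
y = forward(params, jnp.ones((4, 2)))   # batch of 4 two-dimensional inputs
```

Because `forward` is a pure function of `params` and `x`, it slots directly into the computational-graph machinery from the JAX module: `jax.grad` of a loss with respect to `params` gives the gradients that Section 1.7 uses for training.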