Overview: Foundations of Discrete Computing

Numerical Methods Module 1 | COMP 536: Modeling the Universe

Author

Anna Rosen

The Big Picture: When Computers Meet Calculus

A Story That Changes Everything

In 1922, Lewis Fry Richardson attempted something audacious: predict the weather using mathematics. Armed with the differential equations that govern atmospheric flow, he spent roughly six weeks computing a single six-hour forecast by hand, working cell by atmospheric cell through derivatives and pressure changes (he later envisioned a "forecast factory" of tens of thousands of human computers doing the same in real time). At the end, Richardson proudly announced his prediction: the atmospheric pressure would change by 145 millibars in 6 hours.

Reality delivered a crushing blow: the actual change was 1 millibar. Richardson’s prediction was wrong by a factor of 145.

But here’s the twist: Richardson’s equations were correct. His mathematics was sound. The catastrophic failure came from something more fundamental: his finite difference steps were far too large for the fast waves hidden in his equations, and small irregularities in the hand-computed initial data were amplified instead of damped. The errors of numerical approximation completely overwhelmed the physics.

Richardson’s failure revealed a profound truth that shapes all computational physics: when we move from the continuous mathematics of calculus to the discrete world of computers, we enter a realm where \(h \to 0\) is impossible, where \(0.1 + 0.2 \neq 0.3\), and where tiny errors can avalanche into disasters.
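That second claim is easy to verify yourself; a two-line Python check (any language with IEEE 754 doubles behaves the same way):

```python
# Floating-point surprise: 0.1 and 0.2 have no exact binary
# representation, so their sum is not exactly 0.3.
a = 0.1 + 0.2
print(a == 0.3)      # False
print(repr(a))       # 0.30000000000000004
print(abs(a - 0.3))  # ~5.6e-17, on the order of machine epsilon
```

The discrepancy is tiny, but as this module shows, tiny discrepancies accumulated over millions of operations are exactly what sank Richardson.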

Your Mission: Master the Art of Approximation

You’re about to discover how to navigate the fundamental paradox of computational physics:

  • Calculus requires taking limits as \(h \to 0\)
  • Computers can’t represent infinitesimally small numbers
  • Yet somehow we can simulate galaxies, track spacecraft to Jupiter, and detect gravitational waves

The resolution of this paradox — understanding exactly what errors we introduce and how to control them — is the foundation of all computational astrophysics.

Why This Matters Now More Than Ever

Modern astrophysics pushes computational limits like never before:

  • LIGO detects strains of \(10^{-21}\) — a signal so small that careless numerics in the analysis pipeline would bury it in round-off error
  • JWST data pipelines process millions of pixels where round-off errors could hide exoplanets
  • Cosmological simulations track \(10^{12}\) particles over \(10^{10}\) years where errors compound exponentially
  • Neural networks for galaxy classification compute millions of derivatives via backpropagation

You NEED to understand numerical methods at a fundamental level — not just which buttons to push, but why algorithms succeed or fail.

Learning Philosophy

This module embodies our “glass-box modeling” approach:

  • Understand every approximation: Know exactly what errors you’re introducing and why
  • Build from fundamentals: Derive methods from Taylor series and error analysis
  • Connect to physics: Every numerical choice has physical consequences
  • Embrace limitations: Finite precision isn’t a bug; it’s a feature that teaches us about our models

Module Learning Objectives

By completing this module, you will be able to:

  • Approximate derivatives with forward, backward, and central finite differences, and choose a step size \(h\) that balances truncation against round-off error
  • Explain machine epsilon and the main types of numerical error, and recognize and avoid catastrophic cancellation
  • Use Taylor series to derive finite difference formulas and predict their \(O(h^p)\) error scaling
  • Judge when numerical differentiation is appropriate and when alternatives such as automatic differentiation are better

Quick Navigation Guide

🎯 Choose Your Learning Path

🏃 Fast Track

Essential concepts only

🚶 Standard Path

Full conceptual understanding

Everything in Fast Track, plus the full conceptual development of each part

🧗 Complete Path

Deep dive with all details

Complete module including:

  • All mathematical derivations
  • Custom method design
  • Automatic differentiation
  • All worked examples
  • Historical context

Mathematical Foundations

Important: 📖 Core Notation and Concepts

Before diving in, let’s establish our mathematical language:

Notation

| Symbol | Meaning | First Appears |
|--------|---------|---------------|
| \(h\) | Step size for finite differences | Part 1 |
| \(\epsilon\) | Machine epsilon (\(\approx 2.2 \times 10^{-16}\) for float64) | Part 2 |
| \(O(h^p)\) | Error scaling with order \(p\) | Part 1 |
| \(f^{(n)}\) | \(n\)-th derivative of \(f\) | Part 3 |
| \(E_{\text{abs}}\), \(E_{\text{rel}}\) | Absolute and relative errors | Part 2 |
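The machine epsilon in the table can be discovered empirically rather than memorized; a minimal sketch:

```python
import numpy as np

# Machine epsilon: the gap between 1.0 and the next representable
# float64, found by halving until 1.0 + eps/2 rounds back to 1.0.
eps = 1.0
while 1.0 + eps / 2 > 1.0:
    eps /= 2

print(eps)                       # 2.220446049250313e-16, i.e. 2**-52
print(np.finfo(np.float64).eps)  # same value, straight from NumPy
```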

Key Concepts Preview

Finite Difference: Approximating derivatives using function values at discrete points: \[f'(x) \approx \frac{f(x+h) - f(x)}{h}\]

Taylor Series: Expanding functions as power series to understand approximation errors: \[f(x+h) = f(x) + hf'(x) + \frac{h^2}{2}f''(x) + O(h^3)\]

The Fundamental Trade-off:

  • Small \(h\) \(\to\) better approximation, but round-off errors dominate
  • Large \(h\) \(\to\) avoids round-off, but the approximation is poor
  • Optimal \(h\) \(\to\) minimizes the total error

Module Contents

Part 1: The Fundamental Paradox - Calculus on Computers

  • Why computers cannot take true limits
  • Forward, backward, and central differences from first principles
  • The optimal step size derivation
  • Practical algorithms for choosing \(h\)
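As a preview of the three stencils, here is a minimal comparison (the test function \(e^x\) at \(x=0\), where \(f'(0)=1\), is my choice, not from the module):

```python
import numpy as np

# First-order forward/backward differences vs. the second-order central
# difference: halving h cuts the central error by ~4x, the others by ~2x.
def forward(f, x, h):  return (f(x + h) - f(x)) / h            # O(h)
def backward(f, x, h): return (f(x) - f(x - h)) / h            # O(h)
def central(f, x, h):  return (f(x + h) - f(x - h)) / (2 * h)  # O(h^2)

for h in [1e-2, 1e-3, 1e-4]:
    print(f"h={h:.0e}  forward err={abs(forward(np.exp, 0.0, h) - 1):.2e}"
          f"  central err={abs(central(np.exp, 0.0, h) - 1):.2e}")
```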

Part 2: Numbers Aren’t Real - Computer Arithmetic & Cosmic Consequences

  • Finding and understanding machine epsilon
  • Three types of numerical error
  • Catastrophic cancellation and how to avoid it
  • Error propagation in long calculations
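Catastrophic cancellation, the third bullet above, can be seen with the classic \(1 - \cos x\) example for small \(x\); a sketch (the stable rewrite uses the half-angle identity):

```python
import numpy as np

x = 1e-8
# Naive form: cos(x) is about 1 - 5e-17, so subtracting it from 1.0
# cancels essentially every significant digit.
naive = 1.0 - np.cos(x)
# Stable form: the identity 1 - cos(x) = 2*sin(x/2)**2 never subtracts
# two nearly equal numbers.
stable = 2.0 * np.sin(x / 2) ** 2
print(naive)   # all significant digits lost to cancellation
print(stable)  # ~5e-17, matching the Taylor estimate x**2/2
```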

Part 3: Taylor Series - The Bridge from Continuous to Discrete

  • Verifying error predictions empirically
  • Designing custom finite difference formulas
  • When NOT to use numerical derivatives
  • Introduction to automatic differentiation
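The automatic differentiation teaser can be previewed with dual numbers, the simplest form of forward-mode AD. A pure-Python sketch (class and function names are mine, not from the course materials):

```python
import math

class Dual:
    """Dual number a + b*eps with eps**2 = 0: carries f and f' together."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # The product rule, encoded in the arithmetic itself
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

def sin(x):
    # Chain rule: d/dx sin(u) = cos(u) * u'
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

# Differentiate f(x) = x*sin(x) at x = 2 exactly -- no step size h at all.
x = Dual(2.0, 1.0)   # seed derivative dx/dx = 1
f = x * sin(x)
print(f.val, f.der)  # f(2) and f'(2) = sin(2) + 2*cos(2)
```

Unlike finite differences, the derivative here is exact to machine precision: there is no \(h\), hence no truncation/round-off trade-off.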

Part 4: Module Synthesis

  • Consolidating concepts
  • Quick reference tables
  • Connections across projects
  • Looking forward

Prerequisites Check

Note: 🔍 Self-Assessment

Before beginning, verify you can:

  • Differentiate elementary functions (polynomials, trig, exponentials) by hand
  • Write out the first few terms of a Taylor series
  • Run Python scripts and work with NumPy arrays

If you’re unsure about any of these, review the prerequisite material or ask for help during office hours.


Ready to begin? Let’s start with Part 1 and discover why taking derivatives on a computer is fundamentally different from the calculus you learned.