Project 3: Monte Carlo Radiative Transfer in a Dusty Star Cluster
COMP 536 | Short Projects
Due: Tuesday March 24, 2026 by 11:59 PM
Collaboration Guidelines
Core algorithms must be independently implemented. You may share debugging strategies, test cases, and plotting code.
Review the Project Submission Guide for collaboration best practices.
You’re building one MCRT code that propagates photon packets through a dusty medium. Start with the simplest case (1 star, 1 band), verify it works, then scale up. Adding more stars is just a loop over sources. Adding more bands is just a loop over wavelengths. The core algorithm doesn’t change.
Learning Objectives
Upon successful completion of this project, you will be able to:
Core Physics Understanding:
- Explain how Monte Carlo methods solve the radiative transfer equation through statistical sampling, converging to the exact solution as the number of packets grows.
- Connect discrete absorption to the exponential probability distributions inherent in radiative transfer.
- Demonstrate that photon packet propagation naturally samples Beer’s law (\(I = I_0 e^{-\tau}\)) and reproduces the formal solution.
- Analyze wavelength-dependent extinction and connect it to dust grain physics (e.g., why \(\kappa_B \gg \kappa_K\)).
Computational Skills:
- Implement Monte Carlo radiative transfer algorithms for 3D homogeneous media with multiple sources.
- Apply inverse transform sampling to generate optical depths from the distribution \(p(\tau) = e^{-\tau}\).
- Design efficient ray-marching algorithms through gridded density fields with proper boundary handling.
- Construct luminosity-weighted sampling schemes for multiple stellar sources across wavelength bands.
Validation & Analysis:
- Validate numerical codes against analytical solutions (\(f_{\text{esc}} = e^{-\tau}\) for uniform media).
- Quantify Monte Carlo convergence and verify its \(1/\sqrt{N}\) error scaling behavior.
- Demonstrate energy conservation to 0.1% precision through careful luminosity bookkeeping.
- Compare multi-band escape fractions to understand differential extinction \((K > V > B)\).
Scientific Interpretation:
- Predict observational signatures of embedded star clusters in color-magnitude diagrams.
- Interpret how dust geometry creates shadows, reddening, and differential extinction patterns.
- Connect computational results to real observations (e.g., JWST’s ability to penetrate dust in IR).
- Assess how stellar position within the cluster affects individual escape probabilities.
Project Overview & Scientific Context
Mission Brief
You’re modeling a young star cluster embedded in a dusty molecular cloud. Your goal: build a Monte Carlo radiative transfer code to predict how dust extinction affects the observed starlight. This project connects microscopic physics (dust absorption) to macroscopic observables (colors and extinction).
Required Background from Module 4
This project builds directly on Module 4’s comprehensive radiative transfer framework. Please ensure you have thoroughly reviewed the course materials on radiative transfer:
- Part I: The Hidden Physics in Every Astronomical Image
- Part II: Mathematical Foundations of Radiative Transfer
- Part III: Monte Carlo Solutions
Key Concepts You Should Understand
Before starting this project, you should be comfortable with the following concepts:
The Physical Picture: Monte Carlo Radiative Transfer
Monte Carlo Radiative Transfer (MCRT) solves the radiative transfer equation through a stochastic process, meaning it uses randomness and probability to represent physical reality. Each “photon packet” represents a bundle of photons carrying a fraction of the source luminosity. The stochastic nature manifests in two key ways: (1) the random sampling of interaction distances from an exponential distribution, and (2) the random emission directions. Through the law of large numbers, these random processes converge to the deterministic solution of the radiative transfer equation.
Discrete Absorption: In this project, you’ll implement discrete absorption where each packet either:
- Deposits ALL its energy at a single interaction point (when the sampled optical depth \(\tau\) is reached), or
- Escapes with ALL its energy (if it reaches the boundary first)
There is NO gradual energy loss - it’s all or nothing! As \(N \to \infty\), this stochastic Monte Carlo approach converges to the exact solution of the radiative transfer equation.
The Physical System
Grid Setup
Domain: Cubic box extending from \((-L/2, -L/2, -L/2)\) to \((L/2, L/2, L/2)\) where \(L = 1\) pc.
Resolution: \(64^3\) Cartesian cells for final results (use CGS units: 1 pc = \(3.086 \times 10^{18}\) cm). Use lower resolution (\(16^3\), \(32^3\)) for debugging.
Cell size: \(\Delta x = L/N_{\text{cells}}\) in each dimension (e.g., \(L/64\) for \(64^3\) grid).
Cell indexing: For position \((x, y, z)\), the cell indices are:
- \(i_x = \lfloor (x + L/2) / \Delta x \rfloor\)
- \(i_y = \lfloor (y + L/2) / \Delta y \rfloor\)
- \(i_z = \lfloor (z + L/2) / \Delta z \rfloor\)
- Where \(\lfloor \cdot \rfloor\) denotes the floor function (integer part)
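The indexing formulas above can be sketched in a few lines; `L_box` and `n_cells` here are illustrative values matching the final-run setup, and `cell_indices` is an assumed helper name:

```python
import numpy as np

# Sketch of the floor-based cell indexing above; L_box and n_cells are
# the fiducial values from this handout, not required names.
L_box = 3.086e18            # 1 pc in cm
n_cells = 64
dx = L_box / n_cells        # cell size (cm)

def cell_indices(position):
    """Map a position (cm, box centered on the origin) to integer cell indices."""
    return np.floor((np.asarray(position) + L_box / 2) / dx).astype(int)

# The box center lands in cell (32, 32, 32) on a 64^3 grid:
print(cell_indices([0.0, 0.0, 0.0]))   # -> [32 32 32]
```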
Star Cluster Configuration
5 ZAMS stars with the following properties (\(Z=0.02\)):
| Star | Mass (\(M_\odot\)) | \(T_{\text{eff}}\) (K) | \(L\) (\(L_\odot\)) | \(R\) (\(R_\odot\)) | Spectral Type |
|---|---|---|---|---|---|
| 1 | 30 | ~38,500 | ~119,000 | ~7.7 | O7V |
| 2 | 20 | ~34,000 | ~43,000 | ~6.0 | O9V |
| 3 | 10 | ~25,000 | ~5,500 | ~3.9 | B0V |
| 4 | 5 | ~17,000 | ~530 | ~2.6 | B5V |
| 5 | 2 | ~9,100 | ~16 | ~1.6 | A5V |
Note: Use your ZAMS models from Project 1 to calculate exact values.
- Stellar positions: Uniformly distributed random positions within a \(0.75^3\) pc\(^3\) sub-volume centered at origin (this keeps stars away from boundaries to avoid edge effects)
- Luminosities and temperatures: From ZAMS models
Dusty Medium
Density Calculations:
Number density: \(n_H = 1000\) cm\(^{-3}\) (molecular hydrogen)
Molecular weight: \(\mu = 2.3\) (for H\(_2\))
Proton mass: \(m_H = 1.67 \times 10^{-24}\) g
Gas density: \(\rho_{\text{gas}} = n_H \times \mu \times m_H = 3.84 \times 10^{-21}\) g/cm\(^3\)
Dust-to-gas mass ratio: \(f_{D/G} = 0.01\)
Dust density: \(\rho_{\text{dust}} = \rho_{\text{gas}} \times f_{D/G} = 3.84 \times 10^{-23}\) g/cm\(^3\)
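As a quick self-check, the density chain above can be reproduced numerically with the values from the text:

```python
# Numeric check of the density calculation above (values from the handout).
n_H = 1000.0      # number density, cm^-3
mu = 2.3          # mean molecular weight for H2 gas
m_H = 1.67e-24    # proton mass, g
f_DG = 0.01       # dust-to-gas mass ratio

rho_gas = n_H * mu * m_H    # ~3.84e-21 g/cm^3
rho_dust = rho_gas * f_DG   # ~3.84e-23 g/cm^3
print(rho_gas, rho_dust)
```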
Dust Model: Draine (2003a,b) Milky Way dust with \(R_V = 5.5\). All data here: https://www.astro.princeton.edu/~draine/dust/dustmix.html
- Data file: kext_albedo_WD_MW_5.5A_30_D03
Note on units: Use CGS units as your default throughout this project.
Reading the Draine opacity file:
Skip the header lines (~80 lines) when reading the file; they contain metadata.
Tabulated data columns:
- lambda = wavelength in vacuo (micron)
- albedo = (scattering cross section)/(extinction cross section)
- \(\langle\cos\rangle\) = \(\langle\cos\theta\rangle\) for scattered light
- C_ext/H = extinction cross section per H nucleon (cm\(^2\)/H); note this is per H nucleon and includes the effects of both absorption and scattering
- K_abs = absorption cross section per mass of dust (cm\(^2\)/g)
- \(\langle\cos^2\rangle\) = \(\langle\cos^2\theta\rangle\) for scattered light
This project ignores scattering entirely. Use the K_abs column (mass absorption coefficient, cm\(^2\)/g of dust) directly; do not use C_ext/H (extinction cross section per H nucleon) or attempt (1 - albedo) corrections. Your optical depth is pure absorption:
\[\tau = \kappa_{\mathrm{abs}} \, \rho_{\mathrm{dust}} \, \Delta s\]
If you accidentally use extinction instead of absorption, your K-band escape fractions will be too low and your results won’t make physical sense.
Band-Averaged Opacities
You’ll calculate Planck mean opacities for each band by integrating the Draine data weighted by each star’s Planck function:
\[\langle\kappa\rangle_{\text{band,star}} = \frac{\int_{\lambda_1}^{\lambda_2} \kappa(\lambda) B_\lambda(T_{\text{eff,star}}) d\lambda}{\int_{\lambda_1}^{\lambda_2} B_\lambda(T_{\text{eff,star}}) d\lambda}\]
Why Planck mean opacity? Each star emits radiation following its Planck function \(B_\nu(T_\text{eff})\). The Planck mean weights the dust opacity by the actual spectrum of light emitted by the star. This gives the effective opacity “seen” by photons from that specific star. Since hot stars emit more blue light and cool stars emit more red light, each star experiences a different effective dust opacity in each band.
Important: Calculate separate opacities for each star-band combination. A 38,500 K star will have different band-averaged opacities than a 9,100 K star, even using the same dust model, because their emission spectra weight the wavelength-dependent \(\kappa_{\lambda}\) differently.
Tip: When creating each star, compute and store \(\langle\kappa\rangle_\text{band}\) for each band in a dictionary or array as a star attribute for quick lookup during packet emission.
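The Planck-mean integral above can be sketched with `scipy.integrate.simpson`; the function names are illustrative, and `lam_cm`/`kappa` stand in for the Draine table already converted to CGS and sorted by increasing wavelength:

```python
import numpy as np
from scipy.integrate import simpson

def planck_lambda(lam_cm, T):
    """Planck function B_lambda(T) in CGS units."""
    h, c, k = 6.626e-27, 2.998e10, 1.381e-16
    return (2 * h * c**2 / lam_cm**5) / np.expm1(h * c / (lam_cm * k * T))

def planck_mean_kappa(lam_cm, kappa, T_eff, lam1, lam2):
    """Planck-weighted band-averaged opacity for one star-band combination."""
    mask = (lam_cm >= lam1) & (lam_cm <= lam2)
    lam, kap = lam_cm[mask], kappa[mask]
    B = planck_lambda(lam, T_eff)           # weight by the star's spectrum
    return simpson(kap * B, x=lam) / simpson(B, x=lam)
```

A quick sanity check: feeding a constant \(\kappa\) array must return that constant, since the Planck weights cancel.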
Primary Bands:
| Band | Wavelength Range | Central \(\lambda\) | Notes |
|---|---|---|---|
| B | 390-500 nm | 445 nm | Blue optical |
| V | 500-600 nm | 551 nm | Visual |
| K | 1.95-2.40 \(\mu\)m | 2.19 \(\mu\)m | Near-infrared |
Important: Convert all wavelengths to cm for CGS consistency (1 nm = \(10^{-7}\) cm, 1 \(\mu\)m = \(10^{-4}\) cm)
Conversion from the Draine file: the file provides the wavelength-dependent \(\kappa_{\text{dust}}\) (per unit dust mass), so \(\tau = \kappa_\text{dust} \rho_\text{dust} \Delta s\) directly, with no further conversion.
Stellar Sources and Band Luminosities
Update your star.py class from Project 1 to calculate band-specific luminosities. Each star emits differently in each wavelength band based on its temperature:
\[\langle L \rangle_{\text{band}} = L_{\text{bol}} \times \frac{\int_{\lambda_1}^{\lambda_2} B_\lambda(T_{\text{eff}}) d\lambda}{\int_{0}^{\infty} B_\lambda(T_{\text{eff}}) d\lambda}\]
where the denominator equals \(\sigma T_{\text{eff}}^4 / \pi\) (from integrating the Planck function over all wavelengths).
The Planck function is: \[B_\lambda(T) = \frac{2hc^2}{\lambda^5} \frac{1}{e^{hc/(\lambda kT)} - 1}\]
Important: Each star’s band luminosity depends on its temperature. Hot stars emit predominantly in B-band while cool stars emit more in K-band. Calculate \(\langle L \rangle_{\text{band}}\) for each star individually.
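A minimal sketch of the band-fraction integral, assuming the constants shown and an illustrative `band_fraction` helper:

```python
import numpy as np
from scipy.integrate import simpson

SIGMA_SB = 5.6704e-5   # Stefan-Boltzmann constant (CGS)

def planck_lambda(lam_cm, T):
    """Planck function B_lambda(T) in CGS units."""
    h, c, k = 6.626e-27, 2.998e10, 1.381e-16
    return (2 * h * c**2 / lam_cm**5) / np.expm1(h * c / (lam_cm * k * T))

def band_fraction(T_eff, lam1, lam2, n=2000):
    """Fraction of bolometric luminosity emitted between lam1 and lam2 (cm)."""
    lam = np.linspace(lam1, lam2, n)
    numer = simpson(planck_lambda(lam, T_eff), x=lam)
    denom = SIGMA_SB * T_eff**4 / np.pi   # integral of B_lambda over all lambda
    return numer / denom

# B band (390-500 nm) fractions for the hottest and coolest cluster stars:
print(band_fraction(38500.0, 390e-7, 500e-7))
print(band_fraction(9100.0, 390e-7, 500e-7))
```

Note a subtlety worth checking yourself: the 9,100 K star emits a *larger fraction* of its bolometric output in B than the 38,500 K star, because the hot star's spectral peak lies far in the UV. "Hot stars emit predominantly in B" refers to their B-to-K ratio, not the absolute B fraction.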
Key Physics You’ll Implement
Optical Depth and Absorption
The heart of Monte Carlo radiative transfer is determining where photons interact with dust:
Sample interaction optical depth: \(\tau_\mathrm{sample} = -\ln(\xi)\) where \(\xi \sim \mathcal{U}(0,1)\) (uniform random number between 0 and 1).
Use discrete absorption:
- March packet through grid accumulating optical depth:
\[ \tau_\text{accumulated} = \sum \kappa \rho_\text{dust} \Delta s\]
where \(\Delta s\) is the distance traveled in the current cell.
When \(\tau_\text{accumulated} \geq \tau_\text{sample}\): deposit ALL packet luminosity at that point
If packet reaches boundary before interaction: escapes with ALL luminosity
For each band (B, V, K), each packet carries:
\[ L_{\text{packet}} = \frac{L_{\text{band,total}}}{N_{\text{packets}}}, \quad \text{where } L_{\text{band,total}} = \sum_{\text{stars}} L_{\text{star,band}} \]
(sum over all stars for THIS band only). This ensures uniform Monte Carlo statistics within each band’s simulation — every packet in that band contributes equally to the error regardless of which star emits it. This is importance sampling: source selection carries the luminosity weighting (brighter stars emit more packets), while equal packet weights minimize variance and keep error accounting clean. If packets had different weights, your convergence analysis would need weighted statistics.
Critical Point: There is no partial absorption! Each packet either deposits 100% of its energy or 0%. This binary outcome, averaged over millions of packets, reproduces the continuous radiation field.
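The all-or-nothing rule can be sketched for a uniform medium (no grid yet); `kappa`, `rho_dust`, and the boundary distance are stand-ins for your own grid machinery:

```python
import numpy as np

# Minimal sketch of discrete absorption in a UNIFORM medium; in a gridded
# medium you would accumulate tau cell by cell instead of inverting it.
rng = np.random.default_rng(42)

def transport_packet(kappa, rho_dust, d_boundary):
    """Return 'absorbed' or 'escaped' for one packet."""
    # Inverse transform sampling; 1 - random() maps [0, 1) onto (0, 1]
    tau_sample = -np.log(1.0 - rng.random())
    s_interact = tau_sample / (kappa * rho_dust)  # uniform: tau = kappa*rho*s
    return "absorbed" if s_interact < d_boundary else "escaped"

# Over many packets the escape fraction approaches e^{-tau_boundary}:
kappa, rho, d = 7.3e3, 3.84e-23, 1.543e18   # V-band kappa, dust density, L/2
n = 100_000
escaped = sum(transport_packet(kappa, rho, d) == "escaped" for _ in range(n))
print(escaped / n, np.exp(-kappa * rho * d))
```

Even though every individual packet deposits 100% or 0% of its energy, the ensemble average recovers Beer's law.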
Packet Emission Geometry
While stellar radii are tiny compared to the box size (\(R_{\text{star}}/L_{\text{box}} \sim 10^{-7}\)), packets must start from the stellar surface:
Initial position: Uniformly sampled on stellar sphere
Sample two uniform random numbers: \(\xi_1, \xi_2 \in (0,1)\)
Convert to spherical coordinates on the unit sphere:
\(\theta = \arccos(2\xi_1 - 1)\) (polar angle, uniform in \(\cos\theta\))
\(\phi = 2\pi\xi_2\) (azimuthal angle, uniform in \(\phi\))
Position on stellar surface. Let star position be \(\vec{r}_* = (x_*, y_*, z_*)\) then:
\(x = x_* + R_{\text{star}} \times \sin\theta \times \cos\phi\)
\(y = y_* + R_{\text{star}} \times \sin\theta \times \sin\phi\)
\(z = z_* + R_{\text{star}} \times \cos\theta\)
Initial direction: Your independently sampled isotropic direction \(\phi\) and \(\theta\) will be the packet’s propagation direction (i.e., unit vector \(\hat{n}\)):
\[\hat{n} = (\sin\theta_{\text{dir}}\cos\phi_{\text{dir}}, \sin\theta_{\text{dir}}\sin\phi_{\text{dir}}, \cos\theta_{\text{dir}})\]
Important note: Store \(\hat{n}\) as a photon attribute!
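The emission recipe above can be sketched as follows; `sample_emission` is an illustrative helper name, and note the deliberately separate random numbers for position and direction:

```python
import numpy as np

rng = np.random.default_rng()

def sample_emission(star_pos, R_star):
    """Sample a surface position and an independent isotropic direction."""
    # Position on the stellar sphere (uniform in cos(theta) and phi)
    xi1, xi2 = rng.random(2)
    theta = np.arccos(2 * xi1 - 1)
    phi = 2 * np.pi * xi2
    pos = np.asarray(star_pos) + R_star * np.array([
        np.sin(theta) * np.cos(phi),
        np.sin(theta) * np.sin(phi),
        np.cos(theta),
    ])
    # Direction: DIFFERENT random numbers (reuse would create correlations)
    xi3, xi4 = rng.random(2)
    th_d = np.arccos(2 * xi3 - 1)
    ph_d = 2 * np.pi * xi4
    n_hat = np.array([np.sin(th_d) * np.cos(ph_d),
                      np.sin(th_d) * np.sin(ph_d),
                      np.cos(th_d)])
    return pos, n_hat

pos, n_hat = sample_emission([0.0, 0.0, 0.0], 6.96e10)
print(np.linalg.norm(pos), np.linalg.norm(n_hat))  # ~R_star, ~1.0
```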
Packet Propagation
Since we’re modeling pure absorption (no scattering), photon packets travel in straight lines:
Direction doesn’t change: Once emitted with direction \(\hat{n}\), the packet maintains this direction
Position update:
\[\vec{r}_\text{new} = \vec{r}_\text{old} + \Delta s \cdot \hat{n}\]
where \(\Delta s\) is the distance to the next cell boundary or interaction point.
Boundary checking: If \(\vec{r}_\text{new}\) crosses any box boundary, the packet escapes.
Cell indexing: Update cell indices after each step to determine local density and opacity.
Cell boundary crossing: Calculate distance to next cell boundary in each dimension and take the minimum \(\Delta s\) to step to the next cell.
Optical depth accumulation: \(\tau_\text{accumulated} = \tau_\text{accumulated} + \kappa \rho_\text{dust} \Delta s\)
Tip: Handle the special case where a direction component is exactly zero (e.g., dx = 0) by setting the distance to that boundary to infinity rather than dividing by zero.
A classic MCRT foot-gun: a packet steps exactly to a cell boundary, and floating-point equality causes it to re-evaluate in the same cell forever. After crossing a boundary, nudge the packet slightly into the next cell:
# After stepping to boundary, nudge into next cell
epsilon = 1e-10 * cell_size
position += epsilon * direction
This costs you nothing in accuracy (\(10^{-10}\) of a cell) but saves hours of debugging.
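The distance-to-boundary calculation, including the zero-component tip above, can be sketched like this (function and argument names are illustrative):

```python
import numpy as np

def distance_to_cell_boundary(position, direction, cell_lo, cell_hi):
    """Distance along direction to the nearest face of the current cell.
    cell_lo/cell_hi are the cell's lower/upper corner coordinates (cm)."""
    d = np.full(3, np.inf)
    for axis in range(3):
        if direction[axis] > 0:
            d[axis] = (cell_hi[axis] - position[axis]) / direction[axis]
        elif direction[axis] < 0:
            d[axis] = (cell_lo[axis] - position[axis]) / direction[axis]
        # direction[axis] == 0: leave as inf (packet never crosses that face)
    return d.min()

# A packet at the cell center moving along +x exits through the +x face:
step = distance_to_cell_boundary([0.5, 0.5, 0.5], [1.0, 0.0, 0.0],
                                 [0.0, 0.0, 0.0], [1.0, 1.0, 1.0])
print(step)
```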
Expected Opacity Values
| Band | \(\langle\kappa\rangle\) (cm\(^2\)/g) | Why This Value? |
|---|---|---|
| B | \(\sim 1.2 \times 10^4\) | Peak dust extinction near 220 nm |
| V | \(\sim 7.3 \times 10^3\) | Still strong extinction |
| K | \(\sim 1.5 \times 10^3\) | Wavelength \(\gg\) grain size |
These apply to dust mass, not gas mass! These are Planck-weighted, band-averaged values for a representative stellar temperature. Expect star-to-star variation at the tens-of-percent level — your ordering (\(\kappa_B > \kappa_V > \kappa_K\)) and scale matter more than matching these numbers exactly.
Important Note: Your exact escape fraction values will differ from others due to random stellar positions within the cluster. The relative ordering \((K > V > B)\) must always hold, but absolute values depend on where stars are placed relative to the box boundaries.
Computational Strategy
Grid Resolution Recommendations
| Phase | Grid | Packets | Purpose |
|---|---|---|---|
| Debugging | \(16^3\) or \(32^3\) | \(10^3\) | Fast iteration, algorithm validation |
| Development | \(32^3\) | \(10^4\) | See physical trends emerge |
| Analysis | \(64^3\) | \(10^5\) | Smooth statistics, final plots |
| High-resolution (optional) | \(128^3\) | \(\geq 10^6\) | If performance allows |
Required for final submission: \(64^3\) grid with \(\geq 10^5\) packets per band. Higher resolution (\(128^3\)) is optional if your code is fast enough.
Why this strategy: Start with low resolution to debug quickly. Increase resolution only after validation passes. The \(64^3\) grid provides sufficient spatial resolution for absorption maps while keeping memory and runtime manageable.
Performance Measurement
Before running \(10^6\) packets:
- Time your code with 1000 packets
- Calculate packets/second
- Estimate time for full runs
- If estimated time > 24 hours, profile your code to find bottlenecks (see Appendix A on using cProfile)
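The estimate above takes only a few lines; `run_mcrt` here is a hypothetical stand-in for your own driver function:

```python
import time

# Hedged sketch of the timing estimate; run_mcrt is NOT a provided function,
# just a placeholder that burns a little CPU.
def run_mcrt(n_packets):
    return sum(i * i for i in range(n_packets * 100))

t0 = time.perf_counter()
run_mcrt(1000)
elapsed = time.perf_counter() - t0
rate = 1000 / elapsed             # packets per second
est_full = 1e5 / rate             # seconds for the required 10^5 packets
print(f"{rate:.0f} packets/s; ~{est_full:.1f} s for 1e5 packets")
```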
Implementation Phases
Why This Progression?
You’re building one MCRT code and running it with increasing complexity to verify correctness at each step:
- Phase 1A (1 star, V-band): Verification against analytical solution \(f_{\text{esc}} = e^{-\tau}\). Any errors here are in your core algorithm.
- Phase 1B (5 stars, V-band): Verify luminosity-weighted emission works. Same transport code, just loop over sources.
- Phase 2 (5 stars, 3 bands): Full analysis. Same transport code, just loop over bands.
You are NOT implementing separate codes for each star or band. You’re running the same MCRT algorithm with different inputs. Once Phase 1A works, Phase 1B is adding a loop. Once Phase 1B works, Phase 2 is adding another loop.
The incremental approach helps you:
- Debug the core algorithm in isolation (Phase 1A)
- Verify multi-source handling before adding wavelength complexity (Phase 1B)
- Catch errors early when they’re easy to diagnose
- Have working baselines to compare against at every step
If you jump straight to 5 stars and 3 bands, you’ll have multiple interacting components (source selection, opacity calculation, transport algorithm) and no way to isolate which one is wrong. Get Phase 1A working first. Make plots. Verify against the analytical solution. Then proceed.
Phase 0: Core Infrastructure — Get Your Framework Working
Build your modular MCRT framework with proper project structure:
project3_mcrt/
├── src/
│ ├── __init__.py
│ ├── constants.py # Copy from Project 1, ensure CGS units
│ ├── star.py # Update from Project 1 with band luminosities
│ ├── zams.py # Copy from Project 1 (Tout et al. formulae)
│ ├── utils.py # Planck function, integration utilities
│ ├── dust.py # Draine opacity processing
│ ├── grid.py # 3D grid structure (CGS units)
│ ├── photon.py # Packet propagation
│ ├── transport.py # Main MCRT engine
│ ├── detectors.py # Escape tracking, observables
│ └── mcrt_viz.py # Visualization and plotting functions
├── data/
│ └── kext_albedo_WD_MW_5.5A_30_D03_all.txt
├── outputs/
│ ├── figures/
│ └── results/
├── docs/
│ ├── research_memo.md
│ └── growth_memo.md
├── tests/
│ └── test_validation.py
├── README.md
├── requirements.txt
└── run.py               # CLI entry point (required)
Important:
- Copy your constants.py, star.py, and zams.py from Project 1
- Update star.py to include band luminosity calculations
- Ensure all units are CGS throughout your code
- Use numpy.random for random number generation
- During debugging, set a seed for reproducibility: np.random.seed(42)
- Remove the seed for final runs to ensure proper Monte Carlo statistics
- Your run.py must support: python run.py test, python run.py validate, python run.py make-figures
Create utils.py with:
- Planck function (blackbody intensity): \[B_\lambda(T) = \frac{2hc^2}{\lambda^5} \frac{1}{e^{hc/(\lambda kT)} - 1}\]
- Integration wrapper using scipy.integrate.simpson
- Band integration function to compute luminosity fractions and dust opacities
Create mcrt_viz.py with:
- Functions for all required plots
- 2D projection maps
- SED plotting
- Convergence analysis plots
Validation: Verify grid indexing and coordinate transformations are correct
Phase 1: Single Band Implementation (V-band only)
Part A — Single Star at Center (START HERE)
This is your verification test. Get this working before anything else.
- Place one \(20~M_\odot\) star at box center (not random — reproducible!)
- Implement absorption-only transport for V-band only
- Start with constant opacity \(\kappa_V = 7300\) \(\mathrm{cm}^{2}\)/g for debugging — this isolates algorithm bugs from opacity calculation bugs
- Verify: Escape fraction falls within analytic bounds for uniform cube (see note below)
- Verify: Energy conservation to 0.1%
- Then implement Planck-weighted opacity and verify it gives a similar value (~7000-8000 \(\mathrm{cm}^{2}\)/g depending on stellar temperature)
Before proceeding to Phase 1B, you MUST have both checks above passing: an escape fraction within the analytic bounds, and energy conservation to 0.1%.
If these don’t pass, debug here. Do not proceed with broken code.
For a star at the center of a uniform cube, photon path lengths depend on direction: face-normal paths travel \(L/2\), while corner-bound paths travel \(L\sqrt{3}/2\). The escape fraction depends on direction: \(f_{\text{esc}}(\hat{n}) = e^{-\kappa \rho \, d(\hat{n})}\).
Your Monte Carlo naturally averages over all directions. To validate, verify that your result falls between the analytic bounds:
\[e^{-\kappa \rho L\sqrt{3}/2} \;<\; f_{\text{esc}}^{\text{MC}} \;<\; e^{-\kappa \rho L/2}\]
The face-normal value \(e^{-\kappa \rho L/2}\) is an upper bound (shortest path), and the corner value \(e^{-\kappa \rho L\sqrt{3}/2}\) is a lower bound (longest path). This is a great example of why Monte Carlo is valuable — it naturally handles what’s hard to integrate analytically.
Part B — All 5 Stars (still V-band only)
Now add source complexity — but the transport algorithm is unchanged.
- Add all 5 stars at random positions within the \(0.75^3\) \(\mathrm{pc}^{3}\) sub-volume
- Implement luminosity-weighted packet emission
- Verify: Energy conservation still holds (same 0.1% threshold)
- Generate V-band absorption map
What changed? Only the source selection. Your propagate_packet() function is identical. You’re just calling it with packets emitted from different stars, weighted by luminosity.
Phase 2: Multi-Band Analysis (B, V, K)
Now add wavelength complexity — but the transport algorithm is still unchanged.
- Add B-band and K-band calculations (different \(\kappa\), same algorithm)
- Compute Planck-weighted opacities for each star-band combination
- Run with \(10^5\) packets minimum per band
- Create output SED showing reddening
- Generate absorption maps for all bands
- Analyze differential extinction
What changed? Only the opacity values. Your propagate_packet() function is identical. You’re just running it three times with different \(\kappa\) values, one per band.
The payoff: Once you see K-band escaping more easily than B-band, you understand why JWST can peer through dust that Hubble cannot. Same physics, different wavelengths — and your code demonstrates it.
Phase 3: Science Analysis and Extensions
- Perform convergence study with \(N_\text{packets} = 10^3, 10^4, 10^5, 10^6\) (attempt \(10^6\) if feasible)
Complete required outputs and implement extensions (see Project 3 Expectations for what each tier looks like).
Quick Sanity Checks
Before diving into analysis, verify your code with these tests:
- Empty box test: Set \(\rho_\text{dust} = 0\) everywhere \(\to\) 100% of packets should escape
- Extreme opacity test: Set \(\kappa \times \rho_\text{dust}\) very high \(\to\) nearly 0% escape (Note: high opacity reduces runtime during debugging, since packets are absorbed quickly, close to their sources)
- Energy conservation: \(|L_\text{in} - (L_\text{abs} + L_\text{esc})| / L_\text{in} < 0.001\) for all runs (this may be higher for very low packet counts due to Monte Carlo noise and grid resolution)
- Statistical convergence: Error should scale as \(1/\sqrt{N_\text{packets}}\)
- Single star centering: Star at origin in uniform cube should give \(f_{\text{esc}}\) within the analytic bounds (see Phase 1A geometry note)
Common Debug Checks
If your results seem wrong:
- Verify \(\tau = -\ln(\xi)\), not \(+\ln(\xi)\)
- Check you’re using \(\rho_\text{dust}\), not \(\rho_\text{gas}\)
- Confirm packets that escape are counted in \(L_\text{escaped}\)
- Test with one packet and print every step
- Verify your random number generator is working (it should give a uniform distribution)
Required Outputs & Analysis for Your Research Memo
- Opacity Validation Plot:
Validate your opacity calculations from the Draine data:
- X-axis: Wavelength
- Y-axis: Mass absorption coefficient \(\kappa\)
- Show the Draine opacity curve for \(R_V = 5.5\)
- Overlay vertical shaded bands for B, V, and K filters
- Include points (plt.scatter) or horizontal lines (plt.axhline) showing calculated band-averaged opacities
- Convergence Analysis:
Plot \(f_{\text{esc}}\) vs \(N_{\text{packets}}\) for each band:
- X-axis: Number of packets (\(10^3\) to \(10^6\), log scale)
- Y-axis: Escape fraction
- Three curves: B-band, V-band, K-band
- Include \(1/\sqrt{N}\) reference line
- Spectral Energy Distribution
Show how dust reddens the star cluster:
- X-axis: Wavelength \(\lambda\) (mark B, V, K centers)
- Y-axis: \(\lambda L_\lambda\) / max(\(\lambda L_\lambda\))
- Two curves:
- Intrinsic: Combined stellar emission (sum of all 5 stars’ Planck functions)
- Observed: Escaped light after dust extinction (from MCRT simulation)
- Divide both curves by the maximum value of the intrinsic curve
- The observed curve should be suppressed at blue wavelengths relative to red
- Spatial Absorption Maps (2D Projections)
Create absorption maps for each band (B, V, K) by integrating through the full box:
\[\sum_z L_{\text{absorbed, band}}(x,y,z)\]
Use log color scale to show dynamic range, choose your limits wisely.
- Escape Direction Map (optional)
2D histogram showing the angular distribution of escaping light:
- Use spherical coordinates \((\theta, \phi)\) for packet escape directions
- Create 2D histogram with \(\theta \in (0, \pi)\) and \(\phi \in (0, 2\pi)\) bins (you’ll need to convert Cartesian escape directions to spherical)
- Color shows escaped luminosity per solid angle
- Should reveal anisotropies due to stellar positions within the box
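The Cartesian-to-spherical conversion needed for this histogram can be sketched as follows (`direction_to_angles` is an assumed helper name):

```python
import numpy as np

def direction_to_angles(n_hat):
    """Convert a unit direction vector to (theta, phi) histogram coordinates."""
    nx, ny, nz = n_hat
    theta = np.arccos(np.clip(nz, -1.0, 1.0))   # polar angle in [0, pi]
    phi = np.arctan2(ny, nx) % (2 * np.pi)      # azimuth wrapped to [0, 2pi)
    return theta, phi

# A packet escaping along +z maps to the pole (theta = 0):
theta, phi = direction_to_angles(np.array([0.0, 0.0, 1.0]))
print(theta, phi)
```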
- Data Table
| Quantity | B-band | V-band | K-band |
|---|---|---|---|
| Band-averaged opacity \(\kappa\) (cm\(^2\)/g) | | | |
| Input luminosity (\(L_\odot\)) | | | |
| Escaped luminosity (\(L_\odot\)) | | | |
| Escape fraction | | | |
| Mean optical depth | | | |
Analysis Discussion for Research Memo
Your memo should address both the physics and computational aspects. These topics are suggestions to help guide your analysis - pursue the aspects you find most interesting in your results.
Physics Results:
- How dust extinction creates spectral reddening in your SED
- Why escape fractions follow K > V > B (connect to dust opacity physics)
- Role of stellar position and luminosity in determining escape probabilities
Numerical Analysis:
- Convergence behavior and verification of \(1/\sqrt{N}\) Monte Carlo error scaling
- Energy conservation accuracy achieved
- Computational performance and any optimizations implemented
Interpretation:
- What your results reveal about observing embedded clusters
- How different parameters (density, dust model, inhomogeneous medium) would affect outcomes
Critical Implementation Notes
Key Points for Success
- Use dust density: \(\rho_\text{dust} = \rho_\text{gas} \times f_{D/G}\) where \(f_{D/G} = 0.01\)
- Correct sign: Sample \(\tau = -\ln(\xi)\) (negative sign critical!)
- Energy tracking: Maintain separate counters for absorbed and escaped luminosity
- Boundary checking: Test all 6 box faces for packet escape
- Packet luminosity: All packets carry \(L_\text{packet} = L_\text{band,total}/N_\text{packets}\) regardless of source
- CGS units: Ensure all calculations use CGS units consistently
CGS Unit Conversions
- 1 pc = \(3.086 \times 10^{18}\) cm
- 1 \(R_\odot\) = \(6.96 \times 10^{10}\) cm
- 1 \(L_\odot\) = \(3.828 \times 10^{33}\) erg/s
- 1 nm = \(10^{-7}\) cm, 1 \(\mu\)m = \(10^{-4}\) cm
Common Pitfalls to Avoid
- Using gas density instead of dust density (a factor-of-100 error!)
- Sign error in tau sampling (packets never absorb)
- Lost packets at boundaries (energy not conserved)
- Variable packet luminosities (breaks statistics)
- Not integrating opacities properly (missing temperature weighting)
- Wrong units in wavelength conversion (nm/\(\mu\)m to cm)
Validation Requirements
Before analyzing results, your code MUST pass:
Test 1: Analytic Bounds
Single star at center, uniform medium in a cube of side \(L\):
- Face-normal upper bound: \(f_{\text{esc}}^{\text{max}} = \exp(-\kappa \rho L/2)\)
- Corner lower bound: \(f_{\text{esc}}^{\text{min}} = \exp(-\kappa \rho L\sqrt{3}/2)\)
- Your MC result must fall between these bounds within \(3\sigma\) Monte Carlo error
- Where \(\sigma = \sqrt{f_{\text{esc}}(1-f_{\text{esc}})/N_{\text{packets}}}\)
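Test 1 can be sketched as a small helper; the function name is illustrative, and the example numbers are the fiducial V-band values from this handout:

```python
import numpy as np

def check_analytic_bounds(f_esc_mc, n_packets, kappa, rho, L_box):
    """Test 1: MC escape fraction vs. face/corner bounds, within 3 sigma."""
    f_max = np.exp(-kappa * rho * L_box / 2)                # face-normal path
    f_min = np.exp(-kappa * rho * L_box * np.sqrt(3) / 2)   # corner path
    sigma = np.sqrt(f_esc_mc * (1 - f_esc_mc) / n_packets)  # binomial error
    ok = (f_min - 3 * sigma) < f_esc_mc < (f_max + 3 * sigma)
    return ok, (f_min, f_max, sigma)

ok, (f_min, f_max, sigma) = check_analytic_bounds(
    f_esc_mc=0.60, n_packets=100_000,
    kappa=7.3e3, rho=3.84e-23, L_box=3.086e18)
print(ok, f_min, f_max)
```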
Test 2: Energy Conservation
\[\left|\frac{L_{in} - (L_{abs} + L_{esc})}{L_{in}}\right| < 0.001\]
Test 3: Convergence Scaling
Standard error \(\propto N^{-1/2}\) for \(N \in [10^3, 10^6]\)
Suggested Timeline & Milestones
To ensure steady progress and avoid last-minute rushing, aim for these checkpoints:
By End of Week 1 (Mar 10)
✓ Phase 0 complete: Project structure set up, P1 code copied
✓ Phase 1A working: Single star, V-band, constant opacity
✓ Validation passed: \(f_{\text{esc}}\) within analytic bounds for uniform cube
✓ Validation passed: Energy conservation < 0.1%
✓ Convergence plot showing \(1/\sqrt{N}\) scaling
✓ At least 3-4 meaningful Git commits
By End of Week 2 (Mar 17)
✓ Planck-weighted opacity calculation implemented and verified
✓ Phase 1B working: All 5 stars, V-band
✓ Phase 2 working: All 3 bands (B, V, K)
✓ V-band absorption map generated
✓ Differential extinction observed (K > V > B escape fractions)
By End of Week 3 (Mar 24)
✓ Convergence analysis complete (\(10^3\) to \(10^5\) packets)
✓ SED plot showing reddening
✓ Absorption maps for all 3 bands
✓ Data table with all required quantities
✓ Both memos written
✓ Final code review and cleanup
✓ Submit by 11:59 PM
Implementation Tips
Before you begin coding, these practical tips will save you hours of debugging:
Performance & Efficiency:
- When creating each star, compute and store \(\langle\kappa\rangle_{\text{band,star}}\) for each band as a star attribute for quick lookup during packet emission - don’t recalculate millions of times!
- Pre-calculate and store grid boundaries (x_min, x_max, etc.) and cell size (\(\Delta x\)) as attributes to avoid recalculating them during packet propagation
- Save intermediate results (e.g., after every \(10^4\) packets) to avoid losing progress if your code crashes during long runs - Monte Carlo results are additive
Debugging Strategy:
- Start with kappa as a simple constant (e.g., \(\kappa_V = 7300~\mathrm{cm}^2\)/g, matching Phase 1A) before implementing the full Draine integration, to isolate algorithmic bugs from opacity calculation bugs
- During debugging, use np.random.seed(42) for reproducible results. Remove the seed for final runs to ensure proper Monte Carlo statistics
- If your calculated opacities differ significantly from expected ranges, check: (1) wavelength unit conversion to cm, (2) correct reading of the dust/H mass from the file header, (3) proper Planck weighting with stellar temperature
Numerical Robustness:
- Use DIFFERENT random numbers for position sampling (\(\xi_1, \xi_2\)) and direction sampling (\(\xi_3, \xi_4\)) - reusing random numbers creates unwanted correlations
- Handle the special case where a direction component is exactly zero (e.g., dx = 0) by setting the distance to that boundary to infinity rather than dividing by zero
- Always check that d_next > 0 before moving the packet - negative or zero distances indicate a bug in boundary calculations
Getting Started
- Review Module 4 lecture notes
- Copy your constants.py, star.py, and zams.py from Project 1
- Start simple: Single packet, single star, uniform medium
- Build incrementally: Add complexity only after validation
- Test constantly: Every new feature needs a test
What Success Looks Like
Appendix A: Using Python’s cProfile for Performance Analysis
What is cProfile?
cProfile is Python’s built-in profiler that measures where your code spends time. It tracks every function call and measures execution time, helping identify bottlenecks.
Basic Usage
Method 1 - Command line:
python -m cProfile -s cumtime run.py make-figures > profile_output.txt
The -s cumtime flag sorts by cumulative time spent in each function.
Method 2: In your script:
import cProfile
import pstats
# Profile a specific function
profiler = cProfile.Profile()
profiler.enable()
# Your MCRT code here
results = run_mcrt(n_packets=1000)
profiler.disable()
# Print statistics
stats = pstats.Stats(profiler)
stats.sort_stats('cumtime')
stats.print_stats(20) # Show top 20 functions
Interpreting Output
ncalls tottime percall cumtime percall filename:lineno(function)
1000 0.234 0.000 45.678 0.046 transport.py:45(propagate_packet)
128000 12.345 0.000 12.345 0.000 grid.py:23(get_cell_index)
- ncalls: Number of times function was called
- tottime: Time spent in this function (excluding sub-functions)
- cumtime: Total time in function (including sub-functions)
- percall: Time per call
What to Look For
- Functions with high cumtime - these are your bottlenecks
- Functions called millions of times with small percall time - consider vectorization
- Unexpected functions taking significant time - may indicate bugs
Example Optimization Workflow
# Before optimization: profile your code
# Identify that get_cell_index takes 30% of runtime
# Optimize that specific function
# Re-profile to verify improvement
Remember: Profile before optimizing! Don’t guess where the bottlenecks are.
Appendix B: Parallel Processing for Bands
Optional Speedup: Band Parallelization
Since the B, V, and K bands are independent, you can run them simultaneously for a ~3\(\times\) speedup using Python’s multiprocessing module.
Key concept: Each band runs as a separate process with its own random seed and absorption map. After all bands complete, combine the results.
Considerations:
- Each process needs a different random seed for independent Monte Carlo sampling
- Return results from each process rather than modifying shared objects
- For small test runs (\(< 10^4\) packets), parallelization overhead may make it slower
Starting point: Look into multiprocessing.Pool and its map() function to distribute bands across cores.
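A hedged sketch of the pattern, where `run_band` is a stand-in for your per-band MCRT driver (here it just draws one number from its own stream):

```python
import numpy as np
from multiprocessing import Pool

def run_band(args):
    """Illustrative per-band worker; a real version would run the full MCRT."""
    band, seed = args
    rng = np.random.default_rng(seed)   # independent RNG stream per process
    f_esc = rng.random()                # placeholder for a real per-band run
    return band, f_esc

if __name__ == "__main__":
    jobs = [("B", 1), ("V", 2), ("K", 3)]   # distinct seeds per band
    with Pool(processes=3) as pool:
        results = dict(pool.map(run_band, jobs))
    print(results)
```

Returning results from each worker (rather than mutating shared objects) is what keeps this safe across processes, as noted above.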
This optimization is entirely optional - focus on getting correct physics first.
Appendix C: Troubleshooting Checklist
If your results seem incorrect, check these common issues:
- f_esc = 100% for all bands: You’re using rho_gas instead of rho_dust
- f_esc = 0% for all bands: Sign error in tau sampling (should be -ln(xi))
- Energy off by factor of 100: Not applying dust-to-gas ratio (f_D/G = 0.01)
- No convergence with N: Variable packet luminosities (all packets must carry L_total/N)
- Packets disappear: Not tracking both absorbed AND escaped packets
- Code extremely slow: Profile first with cProfile before optimizing
“Monte Carlo: Solving intractable integrals by rolling dice since 1946”