Project 3: Expectations & Grading

COMP 536 | Short Projects

Author

Dr. Anna Rosen

Published

April 22, 2026

How This Project Is Graded

There is no point-by-point rubric. Your grade reflects what you demonstrated you can do and how well you did it.

The tiers below describe the scope of work — which phases you completed and what outputs you produced. But scope alone doesn’t determine your grade. Within each tier, quality matters: correctness of results, clarity of code, rigor of validation, thoughtfulness of your research memo, and care in your figures. The tiers set a ceiling, not a guarantee.

To put it plainly:

  • Completing Phase 2 with sloppy code, broken validation, and a superficial memo is not B work.
  • Completing Phase 1A with meticulous validation, clean code, and a thoughtful memo that demonstrates real understanding is solid C work — and might earn higher.
  • Correctness always comes first. Code that produces wrong answers is worth less than code that produces correct answers for a simpler problem.

A note on the research memo: this is a computational science course, not an astrophysics course. You are not expected to arrive with deep physics knowledge. You are expected to explain the methods you used, show that you understand what your code is doing, and interpret your results in physical terms. If your escape fractions come out \(K > V > B\), I want to see that you can explain why — not at the level of dust grain cross-sections, but at the level of “shorter wavelengths interact more strongly with dust, so blue light gets absorbed more.” Demonstrate understanding. Show your reasoning. That’s what the memo is for.

The tiers are cumulative: B-level scope includes everything in C, and A-level scope includes everything in B.


C — Satisfactory

Scope: Phase 1A — the core MCRT algorithm, validated.

This means:

  • Single star at box center, V-band, constant opacity (\(\kappa_V = 7300\) cm\(^2\)/g)
  • Escape fraction matches analytical solution \(f_\mathrm{esc} = e^{-\tau}\) within Monte Carlo error
  • Energy conservation verified: \(|L_\mathrm{in} - (L_\mathrm{abs} + L_\mathrm{esc})|/L_\mathrm{in} < 0.001\)
  • Convergence plot showing error scales as \(1/\sqrt{N}\)
  • Code is modular, runs without errors, and uses CGS units throughout
  • Research memo describes your algorithm, shows validation results, and explains the physics
  • Growth memo submitted
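The analytic gate in this checklist can be exercised on a stripped-down version of the problem before you trust your 3D code. The sketch below is a minimal 1D toy, not the project's grid code: it assumes a homogeneous, absorption-only medium of total optical depth \(\tau\) along the packet path, and every name in it is illustrative.

```python
import math
import random

def escape_fraction(tau_max, n_packets, seed=0):
    """Monte Carlo estimate of f_esc through a homogeneous,
    absorption-only medium of total optical depth tau_max."""
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n_packets):
        # Sample the optical depth travelled before absorption;
        # 1 - random() avoids log(0) since random() lies in [0, 1).
        tau = -math.log(1.0 - rng.random())
        if tau > tau_max:          # packet crosses the whole column
            escaped += 1
    return escaped / n_packets

tau = 1.0
n = 100_000
f_mc = escape_fraction(tau, n)
f_exact = math.exp(-tau)                        # f_esc = e^{-tau}
sigma = math.sqrt(f_exact * (1 - f_exact) / n)  # binomial MC error
assert abs(f_mc - f_exact) < 5 * sigma          # agree within MC error
```

If your full 3D code can't pass this same comparison for the single-star constant-opacity case, the bug is in the transport loop, not the physics.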

What this signals: You built a working MCRT code and proved it gives the right answer. The hardest part — ray marching through a 3D grid, accumulating optical depth, and correctly handling boundaries — is done.

What moves you within this tier: Quality of your validation evidence (do you show the analytical comparison quantitatively, or just claim it works?). Clarity of your code organization. Whether your memo explains what the algorithm does and why the results make sense, or just describes the steps you took.


B — Good

Scope: Phases 1A + 1B + 2 — multi-source, multi-band, real dust physics.

Everything in C, plus:

  • Phase 1B: All 5 ZAMS stars with luminosity-weighted packet emission (V-band)
  • Phase 2: All 3 bands (B, V, K) with Planck-weighted opacities from the Draine data
  • \(\geq 10^4\) packets per band with results that show clear physical trends
  • Required plots:
    • Opacity validation (Draine curve with band-averaged values marked)
    • Convergence analysis (\(f_\mathrm{esc}\) vs. \(N_\mathrm{packets}\) for each band)
    • SED comparison (intrinsic vs. observed)
    • Absorption maps for all 3 bands (2D projections)
  • Data table with opacities, luminosities, escape fractions, and mean optical depths
  • Research memo interprets your results: why do the escape fractions differ across bands? What does the SED comparison tell you about how dust changes observed starlight? How do stellar positions affect which stars are more or less extincted?
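One way to sketch the Planck-weighted band average (as opposed to the flat average called out below): weight the tabulated opacity by the star's Planck function \(B_\lambda(T)\) and integrate across the band. The opacity curve here is a toy power law standing in for the Draine table, and the band limits and temperature are illustrative assumptions, not project values.

```python
import numpy as np

H = 6.626e-27    # Planck constant, erg s
C = 2.998e10     # speed of light, cm/s
KB = 1.381e-16   # Boltzmann constant, erg/K

def planck_lambda(wav, T):
    """B_lambda(T) in CGS units; wav in cm."""
    return (2.0 * H * C**2 / wav**5) / np.expm1(H * C / (wav * KB * T))

def planck_weighted_kappa(wav, kappa, T):
    """Band-averaged opacity <kappa> = sum(kappa * B) / sum(B) on a
    uniform wavelength grid (the grid spacing cancels in the ratio)."""
    B = planck_lambda(wav, T)
    return float(np.sum(kappa * B) / np.sum(B))

# Toy stand-in for a tabulated dust curve (NOT the Draine data):
# kappa ~ 1/lambda across a V-like band, 5000-6000 Angstrom.
wav = np.linspace(5000e-8, 6000e-8, 200)   # cm
kappa_tab = 7300.0 * (5500e-8 / wav)       # cm^2 / g
kappa_V = planck_weighted_kappa(wav, kappa_tab, T=3.0e4)
```

A flat average weights every wavelength equally; the Planck weighting pulls \(\langle\kappa\rangle\) toward the wavelengths where the star actually emits, which is why the two disagree most for very hot or very cool sources.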

What this signals: You can take a validated algorithm and deploy it on a realistic problem. You can work with real data (Draine opacities), implement non-trivial methods (Planck-weighted averages), and interpret your computational results.
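For the luminosity-weighted emission in Phase 1B, one standard scheme is equal-energy packets whose source star is drawn with probability \(L_i / L_\mathrm{tot}\), via an inverse-CDF draw over the cumulative luminosities. The luminosities below are made-up placeholders, not the project's ZAMS values.

```python
import random

# Placeholder luminosities (arbitrary units, NOT the project's star list).
luminosities = [1.0, 5.0, 20.0, 80.0, 300.0]
L_tot = sum(luminosities)

def sample_star(rng):
    """Draw a star index with probability L_i / L_tot, so that
    equal-energy packets reproduce each star's share of L_tot."""
    x = rng.random() * L_tot
    cumulative = 0.0
    for i, L in enumerate(luminosities):
        cumulative += L
        if x < cumulative:
            return i
    return len(luminosities) - 1   # guard against float round-off

rng = random.Random(1)
n = 200_000
counts = [0] * len(luminosities)
for _ in range(n):
    counts[sample_star(rng)] += 1
fractions = [c / n for c in counts]   # approaches L_i / L_tot as n grows
```

The payoff of equal-energy packets is that every tally (absorbed, escaped) is just a packet count times a constant, which makes the energy-conservation check trivial to implement.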

What moves you within this tier: Are your figures clear and well-labeled, or hastily thrown together? Does your memo explain what you found and why it makes physical sense, or just narrate your workflow? Is your opacity calculation correct (Planck-weighted, not flat-averaged)? Do your results pass all validation tests, or are there issues you’re ignoring?


A — Excellent

Scope: Everything above, executed with care and scientific depth.

Everything in B, plus:

  • \(\geq 10^5\) packets per band (smooth statistics, converged results)
  • Code is well-documented, efficient, and cleanly organized
  • Convergence study across \(N = 10^3\) to \(10^5\) (or \(10^6\) if feasible)
  • Research memo goes beyond describing results — it demonstrates deeper understanding:
    • Why does dust affect short wavelengths more than long wavelengths? (Connect opacity to wavelength.)
    • What do your absorption maps tell you about how light propagates through the medium?
    • What are the limitations of your simulation? What physics did you leave out, and how would including it change the results?
    • If you want to go further: connect to real observations (e.g., why infrared telescopes can see through dust that optical telescopes cannot)
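The \(1/\sqrt{N}\) scaling in the convergence study can be demonstrated on the analytic single-star case: repeat the estimate at several packet counts and watch the mean error shrink by roughly \(\sqrt{10}\) per decade. This is a minimal sketch assuming an absorption-only toy medium of optical depth \(\tau\); no names here come from the starter code.

```python
import math
import random

def f_esc_estimate(tau_max, n, rng):
    """One Monte Carlo estimate of f_esc through optical depth tau_max."""
    escaped = sum(1 for _ in range(n)
                  if -math.log(1.0 - rng.random()) > tau_max)
    return escaped / n

rng = random.Random(3)
tau = 1.0
exact = math.exp(-tau)

# Mean absolute error at each packet count; expect ~1/sqrt(N) scaling.
mean_err = {}
for n in (10**3, 10**4, 10**5):
    trials = [abs(f_esc_estimate(tau, n, rng) - exact) for _ in range(20)]
    mean_err[n] = sum(trials) / len(trials)
```

Plotted on log-log axes these points should fall near a line of slope \(-1/2\); a noticeably shallower slope usually signals a bias (a bug), not statistical noise.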

What this signals: You understand both the computational methods and the physical system well enough to reason about your results, not just report them.

What separates a low A from a high A: The depth of your reasoning. Do you just state that K-band escapes more, or do you explain why in terms of how opacity depends on wavelength? Do your figures tell a coherent story, or are they six disconnected plots? Is your code something a collaborator could pick up and use, or would they need to reverse-engineer it?


Beyond A — Extensions

For students who want to push further, any of these demonstrate additional depth:

  • \(10^6\) packets with performance profiling (what’s the bottleneck?)
  • Escape direction maps showing angular anisotropy
  • Band parallelization with multiprocessing
  • \(128^3\) grid resolution
  • Varying dust density or comparing \(R_V = 3.1\) vs. \(R_V = 5.5\) models (both Draine files are provided)
  • Inhomogeneous density fields (e.g., a clumpy medium)
  • Your own idea — the best extensions come from genuine curiosity

Extensions are required for graduate students and optional (but encouraged) for undergraduates. See the Project Submission Guide for details.


How to Succeed (and How Not to Fail)

You have three weeks for this project. That is not an accident — it reflects the real complexity of building, validating, and analyzing a 3D Monte Carlo simulation. Use the time. Don’t compress three weeks of work into the last three days.

A reasonable pace looks like:

  • Week 1: Phase 0 + Phase 1A. Grid, photon emission, ray marching, single-star validation. This is where you build and debug the core algorithm.
  • Week 2: Phase 1B + Phase 2. Multiple stars, multiple bands, Planck-weighted opacities, required plots. This is where you add physical realism.
  • Week 3: Analysis, convergence study, research memo, polish. This is where you turn code into science.

If you start week 3 without a working Phase 1A, you are in trouble. The project is designed so that the hardest debugging happens early (ray marching, cell boundary crossing), and later phases are mostly configuration changes. But only if you follow the progression.

Start Phase 1A the day the project is assigned. Not the day after, not next week. The constant-opacity single-star case doesn’t require reading the Draine file, computing Planck integrals, or handling multiple sources. It requires building the grid, emitting packets, and propagating them. That’s it.

If you are stuck, ask for help. Come to office hours. Post on Slack. Talk to classmates. The single most common failure mode in this course is students who struggle alone in silence and run out of time. If you are stuck on Phase 1A after 5 days, that is a signal to come talk to me — not a signal to keep trying the same thing. I would rather help you get unstuck in week 1 than grade a broken submission in week 3.

Use the validation gates. If your \(f_\mathrm{esc}\) doesn’t match \(e^{-\tau}\), your code has a bug. Find it before adding more complexity. If energy isn’t conserved, you’re losing or double-counting packets. The validation tests exist to catch problems when they’re easy to diagnose.
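The energy gate is cheap to wire in from day one: tally where every packet's energy ends up and check the residual on every run. Below is a toy sketch assuming an absorption-only medium with equal-energy packets; `run_toy_mcrt` and its arguments are illustrative, not the starter code's API.

```python
import math
import random

def run_toy_mcrt(tau_max, n_packets, seed=2):
    """Absorption-only toy run that tallies where every packet's
    energy ends up: absorbed in the medium or escaped."""
    rng = random.Random(seed)
    e_packet = 1.0 / n_packets     # equal-energy packets, L_in = 1
    L_abs = L_esc = 0.0
    for _ in range(n_packets):
        tau = -math.log(1.0 - rng.random())   # depth before absorption
        if tau > tau_max:
            L_esc += e_packet
        else:
            L_abs += e_packet
    return L_abs, L_esc

L_in = 1.0
L_abs, L_esc = run_toy_mcrt(tau_max=1.0, n_packets=50_000)
residual = abs(L_in - (L_abs + L_esc)) / L_in
assert residual < 1e-3    # the gate from the C-tier checklist
```

If this residual ever creeps above the threshold, a packet is being dropped or double-counted somewhere, most often at a grid boundary crossing.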