After flirting with ideas for months, I finally picked a CUDA pet project. Let’s see if I have the free time to build it…
An optics simulator! That is, the kind of software Nikon/Canon use. It’s really a perfect application for CUDA.
I want to take a blueprint for a photographic lens and output a TIFF file of a sensor perceiving a test chart through the virtual lens. More specifically, I want to predict MTF and chromatic aberrations from a sufficiently detailed optical design plan.
Wavelength-aware: this is essential. To model chromatic aberrations correctly, I need to cast rays at specific wavelengths (in nanometers), not as RGB colors.
Ray-tracer: this isn’t as fancy as the term normally implies. Given the very restricted set of surfaces my rays can hit, almost every ray crosses every element in the order in which they’re laid out.
Bayer CFA: at the back of the lens I will model a grid of color-filtered photosites in a classic Bayer array, and rebuild a full-color image for final output (demosaicing in a final pass).
I plan to start with simulating the Pierre Angenieux 100mm f/1 that NASA used on Apollo fly-bys of the moon.
This week I completed phase 1, which was (a) computing the missing lens dimensions from the raw specification (it took a couple of nights to get this right) and (b) visualizing the design in OpenGL (essential for debugging the rays once I start casting them about)…
Before I embark on phase 2 (CPU rays), I’ll shore up my debugging toolkit by improving the visualization and adding the test chart and sensor to the view. Then come the CPU rays.
After that, as many CUDA rays as my G86 laptop can throw at it (or whatever beefy box I can find at the office)... a deluge of rays.