Well, let’s set the tone of this journal with something appropriately geeky and incomprehensible. It’s a cute story; skip to the end if your eyes start to glaze over. I’ll forget it if I don’t write it down.

A couple of weeks ago, while I was at the LA SIGGRAPH February meeting, I was struck by an interesting idea for facial/performance capture on CG models. Essentially, you would get a decent scan of someone’s face using a micron scanner, then record their facial performance from several angles with HD cameras. You would then feed the footage into boujou or Realviz Matchmover and track the face, constructing a 3D point cloud of it. You would match each point in the point cloud to a 2D point on a UV texture map wrapped around the CG head from the micron scan, and then project data from the point cloud onto that UV map.
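If I were actually going to try this, the first step, pinning each tracked point to a spot on the scan’s UV map, might look something like the Python below. Every name and data layout here is my own invention, just a sketch of the idea, not anything from the actual papers:

```python
import numpy as np

def assign_uvs(scan_verts, scan_uvs, tracked_pts):
    """Pin each tracked point to a UV coordinate on the neutral micron scan.

    scan_verts  : (V, 3) vertex positions of the scanned head
    scan_uvs    : (V, 2) per-vertex UV coordinates of that scan
    tracked_pts : (P, 3) the point cloud from the camera track (neutral frame)

    Returns a (P, 2) array: each tracked point inherits the UV of the
    closest scan vertex.  A real tool would interpolate across the nearest
    triangle instead of snapping to a vertex, but this shows the idea.
    """
    # squared distance from every tracked point to every scan vertex
    d2 = ((tracked_pts[:, None, :] - scan_verts[None, :, :]) ** 2).sum(axis=-1)
    nearest = d2.argmin(axis=1)  # index of the closest scan vertex per point
    return scan_uvs[nearest]
```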
What this would get you:
– an animated displacement map, using the RGB channels of each pixel in the UV map to represent the XYZ vectors of the points in the point cloud for -that frame- (there’s a rough code sketch of this right after the list)
– the magnitude of the move in each frame, stored either in an alpha channel, or, more likely, an accompanying 16-bit image file, tapered off by a magnifier
– real-world data on the actor’s face appropriately mapped onto a deformed CG model
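Here’s roughly how I imagine baking one frame of that map. Everything in it is hand-waved by me (the image size, the max-move normalization, the total lack of gap-filling between points), so treat it as a doodle rather than a pipeline:

```python
import numpy as np

def bake_frame(uvs, prev_pts, frame_pts, size=1024, max_move=0.05):
    """Rasterize one frame's point moves into an RGBA float image.

    uvs       : (P, 2) UV coordinate assigned to each tracked point
    prev_pts  : (P, 3) point positions in the previous frame
    frame_pts : (P, 3) point positions in this frame
    max_move  : assumed largest per-frame move (scene units), used to remap
                the signed XYZ offsets into the 0..1 range of the RGB channels

    RGB holds the XYZ move for this frame, A holds the magnitude of that
    move.  Only texels under tracked points get written; a real pipeline
    would scatter or diffuse to fill the space between points.
    """
    img = np.zeros((size, size, 4), dtype=np.float32)
    moves = frame_pts - prev_pts                             # per-point move this frame
    rgb = np.clip(moves / (2.0 * max_move) + 0.5, 0.0, 1.0)  # signed move -> 0..1
    mag = np.linalg.norm(moves, axis=1)                      # length of the move
    px = (np.clip(uvs, 0.0, 1.0) * (size - 1)).astype(int)   # UV -> pixel coords
    img[px[:, 1], px[:, 0], :3] = rgb
    img[px[:, 1], px[:, 0], 3] = mag
    return img
```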
The displacement would have to be aggregated from frame to frame, building on top of itself, so you would need some sort of blending with manual keyframes, since the accumulated result would no doubt get skewed the further into the sequence it got.
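The accumulation-plus-keyframes part might boil down to something like this, working on the raw per-point moves before they get baked into the map. The keyframe business is pure guesswork about how I’d fight the drift:

```python
import numpy as np

def accumulate_moves(per_frame_moves, key_offsets, blend=0.25):
    """Sum each frame's per-point moves into a running offset from the neutral
    scan, easing toward hand-corrected keyframes so tracking error doesn't
    compound across the whole sequence.

    per_frame_moves : list of (P, 3) arrays, one per frame (frame_pts - prev_pts)
    key_offsets     : dict {frame_index: trusted (P, 3) total offset from neutral}
    blend           : how strongly a keyframe pulls the accumulation back
    """
    out = []
    total = np.zeros_like(per_frame_moves[0])
    for i, move in enumerate(per_frame_moves):
        total = total + move                 # the displacement builds on itself
        if i in key_offsets:
            # ease the accumulated offset toward the manually corrected one
            total = (1.0 - blend) * total + blend * key_offsets[i]
        out.append(total.copy())
    return out
```

You’d then bake the running total for each frame, rather than the raw per-frame move, if you wanted the map to carry the full offset from the neutral scan.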

So I explained this complicated idea to my friends on the car ride home.
They informed me that I had, in great detail, without having ever read the papers, described The Matrix’s Universal Capture technique.

…so I’m a genius, just a little bit slow.