Apple also highlighted computational photography last month when it introduced three new iPhones. One yet-to-be-released feature, Deep Fusion, will process images with an extreme amount of detail.
The big picture? When you take a digital photo, you’re not actually capturing a single photo anymore.
“Most photos you take these days are not a photo where you click the photo and get one shot,” said Ren Ng, a computer science professor at the University of California, Berkeley. “These days it takes a burst of images and computes all of that data into a final photograph.”
Computational photography has been around for years. One of the earliest forms was HDR, for high dynamic range, which involved taking a burst of photos at different exposures and blending the best parts of them into one optimal image.
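To make the idea concrete, here is a minimal sketch of that kind of exposure blending, written in Python with NumPy. It is a simplified illustration, not any phone maker's actual pipeline: each frame is a grid of brightness values between 0 and 1, and pixels near mid-gray are assumed to be "well exposed" and weighted more heavily in the blend.

```python
import numpy as np

def blend_exposures(exposures, sigma=0.2):
    """Merge a burst of differently exposed frames into one image.

    Each frame is an HxW array of brightness values in [0, 1].
    Pixels near mid-gray (0.5) are treated as well exposed and get
    higher weight; blown-out or crushed pixels contribute less.
    (A toy stand-in for real HDR merging, which also aligns frames
    and handles color.)
    """
    stack = np.stack(exposures)                    # shape (N, H, W)
    # Gaussian weight centered on mid-gray brightness
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # normalize per pixel
    return (weights * stack).sum(axis=0)           # weighted per-pixel blend

# Three frames of the same scene: underexposed, normal, overexposed.
dark = np.linspace(0.0, 0.4, 9).reshape(3, 3)
mid = np.clip(dark + 0.3, 0.0, 1.0)
bright = np.clip(dark + 0.6, 0.0, 1.0)

fused = blend_exposures([dark, mid, bright])
```

Because the result is a weighted average, every fused pixel lands between the darkest and brightest frame's value at that spot, pulling detail from whichever exposure rendered it best.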
Over the last few years, more sophisticated computational photography has rapidly improved the photos taken on our phones.
Google gave me a preview of its Pixel phones last week. Here’s what they tell us about the software that’s making our phone cameras tick, and what to look forward to. (For the most part, the photos will speak for themselves.)