October 9, 2007 weblog
Focus images instantly with Adobe's computational photography
Lisa Zyga
contributing writer
Adobe has recently unveiled some novel photo-editing abilities based on a new technology it calls computational photography. Using a combination of a special lens and computer software, the technique splits a camera image into multiple views and then reassembles them on a computer.
The method uses a lens embedded with 19 smaller lenses and prisms, like an insect's compound eye, to capture a scene from different angles at the same time. As Dave Story, Vice President of Digital Imaging Product Development at Adobe, explained, this lens can determine the depth of every pixel in the scene.
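Adobe has not published how its software recovers that depth, but the standard approach with a multi-lens capture is to measure how far each point shifts between neighbouring sub-images (its disparity), since nearby objects shift more than distant ones. The sketch below illustrates the idea with simple block matching between two of the sub-views; the function name, window size, and brute-force search are illustrative assumptions, not Adobe's algorithm.

```python
# A minimal sketch of per-pixel depth from two sub-lens views, under the
# assumptions noted above (not Adobe's published method).
import numpy as np
from scipy.ndimage import uniform_filter

def disparity_map(left, right, max_disp=16, win=5):
    """Brute-force block matching: for each pixel, try every candidate shift
    and keep the one whose surrounding window matches the other view best."""
    best_disp = np.zeros(left.shape, dtype=int)
    best_cost = np.full(left.shape, np.inf)
    for d in range(max_disp):
        # Align the second view with the first under a trial shift of d pixels.
        shifted = np.roll(right.astype(float), d, axis=1)
        # Average squared difference over a win x win neighbourhood.
        cost = uniform_filter((left.astype(float) - shifted) ** 2, size=win)
        better = cost < best_cost
        best_disp[better] = d
        best_cost[better] = cost[better]
    # Larger disparity means the object is closer; depth is roughly 1/disparity.
    return best_disp
```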
This means that, after the photo is taken and transferred to a computer, people can edit certain layers of the photo within seconds. If a user wants to eliminate the background, the new software can simply erase everything in the image that appears at or beyond a certain distance.
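In terms of data, that operation needs only the image and its per-pixel depth map. A minimal sketch, assuming the software exposes both as arrays (a hypothetical interface; Adobe has not published one):

```python
# A minimal sketch of depth-based background removal, assuming an RGB image
# plus a per-pixel depth map are available as arrays (hypothetical inputs).
import numpy as np

def remove_background(image, depth, cutoff):
    """Return a copy of `image` with every pixel at or beyond `cutoff` erased.

    image : (H, W, 3) array of RGB values
    depth : (H, W) array of per-pixel distances from the camera
    cutoff: distance at or beyond which pixels count as background
    """
    result = image.copy()
    background = depth >= cutoff   # True wherever the scene is too far away
    result[background] = 0         # erase (here: set to black; could be alpha)
    return result

# Example with synthetic data: a 4x4 image whose right half is "far away".
if __name__ == "__main__":
    rgb = np.full((4, 4, 3), 200, dtype=np.uint8)
    depth = np.tile(np.array([1.0, 1.5, 6.0, 8.0]), (4, 1))
    print(remove_background(rgb, depth, cutoff=5.0)[:, :, 0])
```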
Further, people can use a 3D focus brush to "reach into the scene and adjust the focus," Story explained during a news conference, in a video posted by Audioblog.fr. At the conference, he used the focus brush to bring a blurry statue in the foreground of an image into focus simply by dragging the tool over that area of the image. He then switched to a de-focus brush to throw a second statue, located further back in the image, out of focus.
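Sharpening a region that was captured blurry is possible only because all 19 sub-lens views are retained, not just a finished photo. The classic way to re-focus such a capture is "shift-and-add": shift each sub-lens view in proportion to its offset from the centre lens, then average them, with the amount of shift selecting which depth lands in focus. The sketch below shows that idea plus a brushed blend between two focal settings; the view stack, lens offsets, and brush logic are assumptions for illustration, not Adobe's implementation.

```python
# A minimal sketch of light-field shift-and-add refocusing and a per-pixel
# "focus brush" built on top of it (illustrative, not Adobe's algorithm).
import numpy as np

def refocus(views, offsets, disparity):
    """Average the sub-lens views after shifting each toward the centre view.

    views     : list of (H, W) arrays, one image per small lens
    offsets   : list of (dy, dx) lens positions relative to the centre lens
    disparity : pixel shift per unit offset; each value focuses a different depth
    """
    acc = np.zeros_like(views[0], dtype=float)
    for view, (dy, dx) in zip(views, offsets):
        # Shift each view in proportion to its lens offset, then accumulate.
        shifted = np.roll(view, shift=(int(round(dy * disparity)),
                                       int(round(dx * disparity))), axis=(0, 1))
        acc += shifted
    return acc / len(views)

def focus_brush(views, offsets, brush_mask, new_disparity, old_disparity):
    """Re-render only the brushed pixels at a different focal setting."""
    before = refocus(views, offsets, old_disparity)
    after = refocus(views, offsets, new_disparity)
    return np.where(brush_mask, after, before)
```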
"This is something you cannot do with a physical camera," he said. "There's no way to take a picture with just this section in focus and everything else out of focus. It's not physically possible to make a camera that does that. But with a combination of that lens and your digital darkroom, you have what we call computational photography. Computational photography is the future of photography."
Knowing the 3D position of every pixel also enables people to view photos from different angles after they are taken, which Story demonstrated. Months after a photo is snapped, people can "move the camera" as if traveling through a scene in Google Earth. Story suggested that this ability would be useful if background objects were accidentally aligned in undesirable positions, such as a lamp post appearing to stick straight out of a person's head. In that case, you could shift the viewpoint slightly to one side to view the scene from a different angle.
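A per-pixel depth map is also what makes these small viewpoint changes possible: when the virtual camera moves, nearby pixels slide across the frame more than distant ones (parallax). The simple forward warp below illustrates the principle and leaves holes where newly revealed areas have no data; it is a sketch of the idea, not Adobe's renderer.

```python
# A minimal sketch of depth-based viewpoint shifting (illustrative only).
import numpy as np

def shift_viewpoint(image, depth, baseline):
    """Re-render `image` as seen from a camera moved `baseline` units sideways.

    image    : (H, W, 3) RGB array
    depth    : (H, W) per-pixel distance from the camera
    baseline : how far the virtual camera moves; larger means more parallax
    """
    h, w, _ = image.shape
    out = np.zeros_like(image)
    # Horizontal parallax is inversely proportional to depth:
    # nearby pixels move further across the frame than distant ones.
    shift = np.round(baseline / depth).astype(int)
    ys, xs = np.indices((h, w))
    new_x = np.clip(xs + shift, 0, w - 1)
    out[ys, new_x] = image   # forward warp; occluded regions remain as holes
    return out
```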
"We can do things that people now have to do manually, much more easily," Story said. "But we can also use computational photography to allow you to accomplish physically impossible results."