Apple calls this special feature "Deep Fusion", and it is a brand-new way of taking pictures in which the Neural Engine inside the A13 Bionic chip uses machine learning to create the output image.
"Deep Fusion" will arrive with a software update on iPhone 11 series this fall
The result is a photo with a stunning amount of detail, great dynamic range, and very low noise. The feature relies on machine learning and works best in low to medium light.
Phil Schiller, Apple's chief camera enthusiast and also head of marketing, demonstrated the feature with a single teaser picture and explained how it works.
How "Deep Fusion" works:
- it shoots a total of 9 images
- even before you press the shutter button, it has already captured 4 short images and 4 secondary images
- when you press the shutter button, it takes 1 long exposure photo
- then, in just 1 second, the Neural Engine analyzes the fused combination of long and short images, picks the best among them, and goes through all 24 million pixels, one by one, to optimize for detail and low noise (a rough sketch of this kind of multi-frame blending follows below)
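For readers who like to think in code, here is a minimal, purely illustrative sketch of the general idea behind multi-frame fusion: blending a long exposure with the cleanest short exposure, pixel by pixel, weighted by noise. Apple has not published how Deep Fusion actually works, so everything below (the Frame type, the fuse function, the inverse-noise weighting, the sample numbers) is an assumption made for illustration only.

```swift
import Foundation

// Toy frame model: flattened grayscale pixels in [0, 1] plus a per-frame
// noise estimate. Both the type and the numbers are made up for this sketch;
// Apple has not published how Deep Fusion combines its frames.
struct Frame {
    let pixels: [Double]
    let noiseLevel: Double
}

// Hypothetical fusion step: blend the long exposure with the cleanest short
// exposure, pixel by pixel, weighting each sample by its inverse noise so
// that cleaner captures contribute more to the final image.
func fuse(longExposure: Frame, shortExposures: [Frame]) -> [Double] {
    // Pick the short exposure with the lowest overall noise. A real pipeline
    // would make this choice per pixel or per tile, not per frame.
    guard let bestShort = shortExposures.min(by: { $0.noiseLevel < $1.noiseLevel }) else {
        return longExposure.pixels
    }

    let wLong = 1.0 / longExposure.noiseLevel
    let wShort = 1.0 / bestShort.noiseLevel

    var output = [Double](repeating: 0, count: longExposure.pixels.count)
    for i in 0..<output.count {
        output[i] = (wLong * longExposure.pixels[i] + wShort * bestShort.pixels[i]) / (wLong + wShort)
    }
    return output
}

// Tiny 2x2 example: one clean long exposure and two noisier short exposures.
let longFrame = Frame(pixels: [0.80, 0.82, 0.79, 0.81], noiseLevel: 0.05)
let shortFrames = [
    Frame(pixels: [0.70, 0.90, 0.75, 0.85], noiseLevel: 0.20),
    Frame(pixels: [0.72, 0.88, 0.76, 0.84], noiseLevel: 0.15),
]
print(fuse(longExposure: longFrame, shortExposures: shortFrames))
```

The real thing runs on the Neural Engine against full-resolution sensor data with learned weights rather than a simple noise heuristic, but the inverse-noise blend captures why combining many captures can beat any single one.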
This truly marks the arrival of computational photography, and Apple claims it is the "first time a neural engine is responsible for generating the output image". In typical Apple fashion, the company also jokingly calls it "computational photography mad science". Whichever definition you pick, we can't wait to see this new era of photography on the latest iPhones this fall (if you are wondering exactly when, there are no specifics, but past practice suggests the end of October).