Apple Event – September 2019
I have been using Apple phones since the era of the iPhone 4 (2010), but only rarely put the camera to use. However, the improvements to the cameras within the iPhone X (2017) resulted in me using the feature more routinely – particularly when doing ‘serious’ photographic work as a means of capturing the true colour and saturation of a location.
Content with my iPhone X, I was confident that this evening’s Apple Event would hold nothing of interest for me: the iPhone 11 came and went, offering little more than what is already found on the iPhone X. The reveal of the iPhone 11 Pro, however, did highlight some stand-out features that were appealing – all centred around the addition of a third camera, with an ultra-wide lens (0.5x zoom) and an improved telephoto lens with roughly a half-stop improvement in low-light gathering ability (now f/2.0 compared with the previous f/2.4 aperture).
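That half-stop figure checks out: light gathered scales with aperture area, so the gain in stops from widening f/2.4 to f/2.0 can be worked out in a couple of lines. A quick sketch (the function name is my own):

```python
import math

def stop_difference(f_old: float, f_new: float) -> float:
    """Stops gained moving from f_old to f_new (a wider aperture).

    Light gathered scales with aperture area, i.e. (f_old / f_new)^2,
    and one stop is a doubling of light, hence the log base 2.
    """
    return math.log2((f_old / f_new) ** 2)

gain = stop_difference(2.4, 2.0)
print(f"{gain:.2f} stops")  # ~0.53, i.e. roughly half a stop
```

In linear terms, (2.4 / 2.0)² means the new lens gathers about 1.44x as much light.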
Beyond the improved hardware, impressive developments have gone on in the supporting software. Night Mode uses adaptive bracketing to capture and fuse multiple exposures with different shutter speeds. This helps reduce blur with moving subjects and reduces noise in the static scene elements.
Not available until after the launch is Deep Fusion. Hailed as Apple’s AI photography, this feature takes nine photographs in quick sequence (including one long-exposure shot). The device’s neural engine then analyses the collection to create an optimal final photo that borrows the best elements of each image, assembling it “pixel-by-pixel” into a higher-resolution 24MP image. To me, this sounds akin to exposure blending and might prove to be a sensational tool for capturing the wide dynamic range found in stained glass.
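To illustrate what I mean by exposure blending: a common approach weights each bracketed frame per pixel by how well exposed it is, so shadows come from the long exposure and highlights from the short one. A minimal NumPy sketch of that idea – not Apple’s actual pipeline, and the function and weighting scheme are my own simplification:

```python
import numpy as np

def blend_exposures(images, sigma=0.2):
    """Fuse bracketed exposures using a simple well-exposedness weight.

    Each pixel (values in [0, 1]) is weighted by how close it is to
    mid-grey (0.5), so blown highlights and crushed shadows contribute
    little; the weighted average keeps detail across the dynamic range.
    """
    stack = np.stack([img.astype(np.float64) for img in images])  # (n, h, w)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # normalise per pixel
    return (weights * stack).sum(axis=0)

# Three toy 'exposures' of the same flat scene: dark, mid, bright.
dark = np.full((2, 2), 0.1)
mid = np.full((2, 2), 0.5)
bright = np.full((2, 2), 0.9)
fused = blend_exposures([dark, mid, bright])
```

Real exposure-fusion implementations also weight by local contrast and saturation and blend across multiresolution pyramids to avoid seams, but the per-pixel weighting above is the core of it.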
I definitely do not need a new phone, but I am tempted: the combination of improved optics with software features that might come close to mimicking my research piques my interest!