The reason your phone captures decent-to-amazing photos is computational photography. Overcoming small sensors and tiny lenses is a difficult feat, so Apple, Google, and others have invested heavily in software (and hardware such as dedicated image co-processing chips) to interpret the data. A single recording of light hitting the image sensor provides a limited amount of information.
But capture a dozen versions at different exposures, identify the components of the scene, and adjust their appearance accordingly, and you've got a pretty good image!
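To make the idea concrete, here's a toy sketch of exposure merging, not Apple's or Google's actual pipeline. It assumes grayscale frames normalized to [0, 1] and weights each pixel by how well exposed it is (closest to mid-gray), so highlights come from the darker frames and shadows from the brighter ones. The function name and the `sigma` parameter are illustrative choices.

```python
import numpy as np

def fuse_exposures(frames, sigma=0.2):
    """Blend multiple exposures of the same scene into one image.

    frames: list of 2-D arrays with values in [0, 1].
    sigma: width of the "well-exposedness" weighting curve.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])  # (N, H, W)
    # Weight each pixel by its distance from mid-gray (0.5):
    # a Gaussian gives near-zero weight to blown-out or crushed pixels.
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0)  # normalize across frames per pixel
    return (weights * stack).sum(axis=0)

# Three synthetic "exposures" of the same gradient scene
scene = np.linspace(0.0, 1.0, 100).reshape(10, 10)
frames = [np.clip(scene * ev, 0.0, 1.0) for ev in (0.5, 1.0, 2.0)]
fused = fuse_exposures(frames)
```

Real pipelines do far more (alignment, denoising, scene segmentation, tone mapping), but the core trick is the same: no single frame has all the information, while the weighted combination does.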
The problem so far has been that while you gain the quality of a computationally composed photo, you lose the ability to edit the raw image data. Like a JPEG, which is the camera software’s interpretation of the data, a photo captured using these technologies is essentially “burned in.” Oh yes, you can edit it, but not with the same latitude as editing a raw file.
And that’s where the Apple ProRAW format comes in. The rendered, processed image created by the iPhone’s amazing AI features is smashed together with the unedited raw data captured by the sensor.
In my latest Smarter Image column for Popular Photography, I dig into ProRAW and how it all works: Testing the advantages of Apple’s ProRAW format. It’s fascinating stuff, because it merges two very different types of images into a hybrid that has pretty specific editing capabilities that depend on the software you use.