The iPhone SE 2’s camera setup is going to lean on Apple’s software

Apple’s newly announced second-generation iPhone SE (or iPhone SE 2, as we’ll be calling it) is the company’s first phone with a single rear camera since 2018’s iPhone XR. While its camera hardware is basic, the new $399 device is powered by last year’s A13 Bionic chip, so it will benefit from Apple’s more recent work on image processing. It’s a combination that could reveal how much of Apple’s photography strength lies in hardware and how much lies in software.

From a hardware perspective, the iPhone SE 2 has a relatively simple setup. On its rear is a single 12-megapixel f/1.8 camera, which, on paper, is exactly the same as what could be found on the back of the iPhone XR. It’s also similar to the camera on the 2017 iPhone 8, whose appearance the SE 2 so closely imitates. That means there’s no extra 12-megapixel ultrawide camera like the one on the iPhone 11, and no 12-megapixel telephoto camera like the one on the 11 Pro. Around the front, there’s a 7-megapixel selfie camera, which is, again, the same resolution as the front cameras in the 8 and the XR.

Software is going to have to do some heavy lifting

While the hardware is a little old-fashioned, the software side of the photography equation is more interesting. As previously mentioned, the iPhone SE 2 is powered by Apple’s A13 Bionic chip, which made its debut in last year’s iPhone 11 lineup. Apple says it’s using this chip’s image signal processor and Neural Engine to offer a host of software-based improvements to the SE 2’s photographs.

The most significant of these is that the phone now uses the more advanced version of Apple’s Smart HDR algorithm that debuted with the iPhone 11 lineup. “Semantic rendering,” as the company calls it, is designed to recognize individual elements within an image in order to light them properly.

Essentially, Smart HDR works by taking a series of underexposed frames and an overexposed frame and using them to recover additional detail where it thinks the image needs it. Semantic rendering then alters the photo based on what’s in it: it might sharpen hair, for example, de-noise the sky, or light a face more evenly. You can read more about the technology in our iPhone 11 Pro review from last year; in practice, it should mean photographs that are much more evenly lit.
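Apple hasn’t published how Smart HDR actually merges its bracketed frames, but the general idea behind exposure fusion can be sketched in a few lines. The toy function below (our own illustration, not Apple’s implementation — the function name and weighting scheme are invented for this example) blends a stack of differently exposed frames, giving each pixel more weight the closer it is to mid-gray, so blown-out highlights and crushed shadows contribute less to the final image:

```python
import numpy as np

def toy_exposure_fusion(under_frames, over_frame):
    """Blend bracketed exposures, favoring well-exposed pixels.

    Frames are 2D arrays of brightness values in [0, 1].
    This is a conceptual sketch, not Apple's Smart HDR.
    """
    stack = np.stack(under_frames + [over_frame]).astype(float)
    # Weight each pixel by its distance from mid-gray (0.5):
    # weight 1.0 at 0.5, falling to 0 at pure black or white.
    weights = 1.0 - np.abs(stack - 0.5) * 2.0
    weights = np.clip(weights, 1e-6, None)  # avoid divide-by-zero
    # Per-pixel weighted average across the exposure stack.
    return (stack * weights).sum(axis=0) / weights.sum(axis=0)
```

Real pipelines align the frames first, fuse at multiple scales, and apply tone mapping afterward; the semantic step would then adjust regions (sky, hair, faces) detected by a segmentation model.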

Unlike the iPhone 8, the SE 2 can also create portrait mode photographs with just a single lens. (The iPhone 8 Plus had the feature, but only because it had dual cameras.) The iPhone XR offered the same capability, and in our review, we actually preferred its portrait mode shots to the ones generated by that year’s XS, since the XR’s main camera had a wider field of view and a wider aperture. Since the SE 2’s camera has that same wide f/1.8 aperture as the XR, we’ve got high hopes. The front-facing camera uses the same technology to create portrait mode shots from its single lens.

In other areas, however, the camera technology found in the iPhone SE 2 has been around for much longer. Optical image stabilization, a 6-element lens, a flicker sensor, 4K 60fps video recording, slo-mo, and cinematic stabilization could all be found in the iPhone 8. Then again, this is Apple’s budget device, so it’s not surprising that it’s not breaking much new ground.

There are inevitably going to be areas where the iPhone SE 2’s hardware will be a limitation. For example, there’s no wide-angle lens here, and no amount of software processing is going to be able to capture data from outside of the main camera’s field of view. But in those cases where you’d normally only rely on the phone’s main camera, Apple’s updated software could do a lot more of the heavy lifting, and we’ll get our best look yet at how far it’s come.