Skógafoss is a must-see, everyone said, and in a country with as much natural beauty as Iceland, that means it’s gonna be some majestic shit. The waterfall didn’t disappoint, but even on a chilly weekday in March, the place was mobbed. My photos — and c’mon, you have to take photos — were littered with tourists in puffy coats, all taking their own pictures of the falls. Not exactly the stuff you print and hang on the wall.
A couple of months later, I watched a picture of the same waterfall appear onscreen at Google’s annual developer conference as the company demonstrated its new AI-powered Magic Editor. In this version, a puffy-coated tourist stood in front of the falls on a gray day. CEO Sundar Pichai talked through the edits happening onscreen: the skies turned blue, the woman was repositioned, her outfit was touched up. Pichai remarked on using the tool to “get rid of some clouds” and brighten up the scene to make it look “as sunny as you remember.”
Phone makers have been emphasizing blue skies and smooth skin for years, but the latest batch of Pixels, Galaxies, and iPhones are using AI to take photo manipulation further than ever. Magic Editor on the Pixel 9 includes a new option called “reimagine.” Now, instead of just erasing or moving objects around, you can add stuff with nothing more than a text prompt. Replace the background; sprinkle in some rainbows and butterflies. It’s in service of creating the scene you remember, not depicting the scene exactly as you saw it, or so Pixel camera PM Isaac Reynolds told Wired earlier this year. They’re memories, not photos.
The thing about photos is that they’re unforgiving. Photos tend to show us the stuff we would otherwise tune out — the stuff we’ve learned to ignore in person turns up in images as glaring distractions. In one of my favorite photos of my son from the past year, he’s sitting on our living room couch wearing a firefighter hat, laughing hard at something with pure joy on his face.
But there’s the problem of our couch, which is quite literally falling apart after withstanding years of cat clawings. My brain tuned out the frayed upholstery a long time ago, but when I look at that photo, my eye immediately goes to the imperfections. Being able to clean up the distractions and keep the focus on the subject is pretty appealing, even to a photo purist like me. But at that point, can you really call it a photo?
The whole thing gets incredibly messy the more you try to pin down a definition of a photo. But as an exercise, I decided to put the tools to the test and force myself into this mindset of capturing memories and not photos. Would I like the images I came up with? Would I be comfortable even calling them photos? I took a Pixel 9 Pro along on some family adventures over a weekend, shot even more photos than I normally would as the family photographer, and then spent some time editing them in Google Photos using these AI tools. And for kicks, I edited a few older photos to see if AI could, in fact, bring them closer to my memories.
When photos are vibes and not photos anymore, then you can add whatever the hell you want to them — as long as it serves the memory. Maybe I don’t have much of an imagination, or maybe the “reimagine” tool is just a bridge too far for this exercise, but I had a hard time finding use cases for it. I added a flock of birds to the sky in a picture of my kid running through a field. It looks plausible and nice, I guess. But if it’s a good photo, that’s probably because the light was really nice and my kid has a big smile on his face — not because of the AI birds.
I switched gears and focused on using the less nihilistic tools in Google Photos’ AI toolbox. During the “taking photos, er, memories” portion of the exercise, I’d pushed myself to take shots that I’d normally skip: a stranger lingering in the frame or a lot of clutter in the background. And you know what? AI is really good for fixing that stuff.
In the framework of recording “memories,” I took stuff out of my images that didn’t serve my memory of the scene. It was hard at first to even see some of these things when I looked at my photos; they looked how they looked because that’s what was there. But what about the black strap of a diaper bag on my husband’s shoulder? I guess I don’t specifically remember it because it’s something one of us is always carrying around on our outings. With a few taps, it’s gone from the image, and I’d never know the difference.
And while taking a whole crowd of people out of my waterfall scene didn’t improve the memory-ness of my photo, I found Magic Editor a lot more useful for taking one or two people out of a frame. Three-year-olds are squirrely little people who don’t usually cooperate if you’re waiting for the perfect moment to take a photo — that includes waiting for someone to move out of the background of my shot. Magic Editor has convinced me that I can take those photos anyway and just remove people after the fact.
It’s also possible to take cleanup too far. I removed a trash can from the background of a picture of my son at a playground. But why stop there? I used reimagine to take out a building housing some restrooms and replace it with trees. The resulting photo is completely convincing, and sure, the public bathrooms aren’t a core part of my memory of that day. But the photo is also more generic, more boring. We could be at any playground in the greater Seattle area or maybe the country. I put the building back.
That’s been my overwhelming impression throughout this experiment. There’s plenty of stuff I don’t mind taking out of a photo, like the crusted snot under my kid’s nose or mouse turds littering the sidewalk where he’s standing. But it’s very easy to cross a line and take so much out of an image that you actually remove the context and imperfections that gave it character.
This exercise has also confirmed my belief that you can’t edit a bad photo into a good photo. Trust me, I’ve spent hours trying over the years. It’s true of traditional photo editing, and it’s just as true of wacky AI tools: good lighting and thoughtful composition will do much more for a photo than any amount of tinkering in Photoshop.
I took out the people, added a more prominent rainbow in the mist, and made an average photo unbearably cheesy in the process.
That photo I took of Skógafoss, the one I tried to clean up with AI, is actually my least favorite photo from that day. After I took that “here’s the famous thing” photo, I’d walked around trying to find different angles and interesting details to point my camera at. A pair of birds nestled into a mossy rock face, a woman in a red coat and pants posing for her camera on a selfie stick, a view of blue sky through the mist near the base of the falls: those are the photos I like best from that day. They’re the pictures that best capture the memory of what it felt like being there.
Do I think there’s a place for AI photo editing tools? For sure. I’m frankly blown away by how much better Magic Eraser is in the AI era, and I think I’ll actually take more photos I’d otherwise write off, knowing I can remove an obvious distraction afterward. But AI doesn’t change the fundamentals of a good photo, at least for me. There’s still no substitute for being there, for good lighting, or for catching the right expression on your subject’s face. After all, what’s a memory if not imperfect?