Apple vs Adobe: One is Clearly Better at AI Photo Clean Up


 "Clean Up," with a colorful logo on the left, and "Adobe Remove," featuring the Adobe logo on the right. Both are set against a soft, cloud-like background with "VS" in the center.

Apple Photos added a new Clean Up feature that removes unwanted objects from photos, a tool that competes directly with the one found inside Adobe products like Adobe Camera Raw (ACR) and Lightroom. So, which one is better?

This week, Apple pushed an update to macOS Sequoia (version 15.1) that adds the first batch of Apple Intelligence features, including the Clean Up tool in Photos, a generative AI feature that removes unwanted objects from images. Some of you might not have realized this tool was coming to the desktop, as Apple has typically showcased it in use on the iPhone. But for those of you who prefer editing images from a desk, Clean Up lives on macOS, too.

Outside of some flashy animations, Clean Up works pretty much identically to Adobe Generative Remove in practice, except that it will sometimes suggest objects it detects in a photo and thinks you might want to remove. Otherwise, it uses the same painting method that Adobe Lightroom and Photoshop users have come to know. Since Adobe just updated Photoshop and its Firefly AI model, we figured now was a great time to see how these two widely available removal tools fare against each other. So, we tasked both with removing the same elements from six different photos to see which performed best.

For each comparison below, the original image is linked in higher resolution, and higher-resolution versions of the two outputs from Apple and Adobe are linked in the text above the comparison slider. We encourage you to look closely at them all and form your own opinions, but I also provide my insight on each result.

Photo #1: Power Lines Against a Setting Sun

Silhouetted birds perched on a power line against a colorful sunset sky between two darkened buildings. Trees and rooftops sit in the distance.

For this image, I asked both platforms to remove the power lines that cut across the middle of the frame. The result from Adobe Generative Remove is on the left, while the result from Apple Clean Up is on the right.

Winner: Apple Clean Up

This was the first photo of the test batch, and I immediately started to question what was going on at Adobe. For it to produce this result is, frankly, unacceptable. This was probably the easiest of the photo editing tasks I provided and it failed — miserably. The remove tool added a ton of pixelated noise, telling me that the AI behind Generative Remove is not comfortable or even familiar with texture.

Apple’s result is perfect. No notes.

Photo #2: Waiting for the Train

A nearly empty train platform under a modern, white steel structure. Two people stand near the edge, facing the tracks. In the background, urban buildings are visible against a cloudy sky. Several benches are unoccupied.

In this photo, I asked both Apple Photos and Adobe Photoshop to remove the two people standing by the edge of the platform. Arguably, this is one of the more common applications of an AI-assisted remove tool. The result from Adobe Generative Remove is on the left, while the result from Apple Clean Up is on the right.

Winner: Apple Clean Up

This, the second image I processed, is where I started to see a trend form. Whatever Adobe did to its latest Firefly model, it desperately does not want to remove objects. Instead, it seems to always want to fill the blank space with something. In this case, it managed to skillfully remove one person but replaced the other with a post. I’m not going to sit here and say that Apple’s job was perfect: it’s not. The AI has some problems with that busy background, and the areas it had to fill in ended up looking like visual clutter. But if I have to pick between visual clutter and a nonexistent, nonsensical post, I’m going with the former.

Photo #3: Tokyo Street

People walk along a rainy street in a shopping area. Two individuals in traditional clothing hold an umbrella. Various shops and signs line the street, with some pedestrians carrying umbrellas.

In this photo, I wanted to remove the people on the left side of the frame so that the focus would be more on the two girls in kimono. The result from Adobe Generative Remove is on the left, while the result from Apple Clean Up is on the right.

Winner: Apple Clean Up

This is one of the more comical results from Adobe, as I can understand where the AI is coming from, but it is absolutely not correct. It’s also another example of the software leaning hard into the “I need to add something” point of view. On the flip side, Apple’s Clean Up does a fairly good job of replicating a reflection on the wet street, which looks pretty similar to the reflection a post in the background is casting (if not a bit too strong). The street, however, is warped and the pixels are smudged, so it’s not a perfect fix. But if I were asked to pick one of these results, the answer is obvious: Apple’s is better.

Photo #4: River Boat

A small boat travels along a canal bordered by modern buildings and a brick wall on one side, reflecting in the water. The cityscape and overcast sky create an urban atmosphere.

I was curious how both Clean Up and Generative Remove would handle a main subject, so I selected this photo of a boat on a river and asked both platforms to remove the boat. The result from Adobe Generative Remove is on the left, while the result from Apple Clean Up is on the right.

Winner: Tie

Adobe finally produced a good result after three successive abject failures. The main difference between the two outputs comes down to the replaced pixels. Adobe’s result is a smudgy, blobby mess that is less noticeable on water than it would be on a more textured background, but it’s not clean and sharp. Apple’s result is much sharper, but it replicates pixels in an odd, unnatural way. Clean Up also specifically recognized the boat, which is why the area it fills with generated pixels is smaller than the one Adobe produced, even though I highlighted the same space on each platform. I’m calling this a tie.

Photo #5: Escalator

A person wearing a mask stands on an escalator at a train station, holding a bag and phone. The station features bright lighting, metallic and white walls, and safety markings on the floor.

I think this is one of the more challenging tasks: I asked Clean Up and Generative Remove to delete the person riding this escalator. The result from Adobe Generative Remove is on the left, while the result from Apple Clean Up is on the right.

Winner: Apple Clean Up

Apple is once again the clear winner here, as it replicates the background sharply and believably. There is still a dark shadow where the person used to be standing, but at least the steps on the escalator are clean, straight, and sharp. Adobe’s result, on the other hand, is riddled with errors. There are floating chunks on both the steps and just below the railing, making it unusable as-is. It would take several more minutes of careful clone-stamping to get the Adobe generation to the level of completeness Apple achieved on its own.

Photo #6: Auditorium

A person sits alone on yellow benches in a spacious, circular auditorium. The seating is arranged in curved rows with light-colored flooring and potted plants lining the perimeter.

To this point, I’ve given Adobe access to RAW files so that I can use Generative Remove in Photoshop’s ACR. But what if no RAW file is available? The next best option is Generative Fill, and that’s what I used in this final comparison. The result from Adobe Generative Fill is on the left, while the result from Apple Clean Up is on the right.

Winner: Apple Clean Up

I want to loop back to when I said that Generative Remove is very uncomfortable with blank space. Generative Fill here absolutely would not give me any result that left that seat blank. The first result it suggested was this two-headed man, but there were two others:

Three images show people seated on a wooden bench. The first depicts two people close together, the second shows a person sitting alone, and the third features a person in a full-body costume. Subdued lighting and a marble floor form the setting.

Just… no. I don’t want to replace the man with another man, I most certainly don’t want that man to have two heads, and the final example appears to depict a person being sucked into the marble. None of these results are good. As a note, I did what Adobe instructs you to do when you want Generative Fill to simply fill a selection with surrounding information in order to remove an object: I left the prompt blank. If I had wanted a person added, I would have said so in the prompt.

On the other side, Apple again does a good job. It’s not perfect, but it at least understood the assignment. Apple Clean Up is the clear winner.

Something is Very Wrong With Adobe’s AI Model

We suspect there is something very wrong with Adobe Firefly right now. We can’t think of a time when Adobe had a feature available in its software for six months and then released a new version of it that was objectively worse. That is exactly what happened with this update.

After I spoke with Adobe about some of these results, the company got back to me today saying that it updated its model last night in response to the problems I reported. I re-tested the above images and, unfortunately, the results were the same. I was really hoping the power lines image would get better, but Generative Remove still produced the same pixelated visual mess this morning.

Something did change, though: I re-ran the Generative Fill task and it gave me something different. It’s not good, but it is different:

Aerial view of a circular, open-air amphitheater with yellow seating. The seating is arranged in tiers around a central performance area. Potted plants adorn the upper perimeter, and a lone statue sits among the seats.

Again, the other options it provided were no less terrible:

Three images of ornate, sculpted masks displayed on a wooden ledge, each with unique designs and textures, set against a tiled floor background.

What is happening here is somewhat of a cautionary tale in the story of AI. Adobe plans to charge Generative Credits to use these tools in the future, going so far as to set up a system where you can already give it money for more credits, even though the company says it’s not actively tracking their usage yet. Even if this problem gets resolved, which I am sure it will, who is to say it won’t happen again the next time Adobe updates its model? These results are not only bad, but they would force me to re-run the tool multiple times in an attempt to get something usable, which would burn credits. That would feel like getting charged for unacceptable results. That’s not a winning business strategy.

We tested the same photos in both Photoshop with ACR and Lightroom Classic on two different machines, and the results were different every time. Sometimes they were usable, while other times they ended up worse than the examples I show above. Of note, none of the times we asked Adobe to remove the power lines resulted in a clean image. That’s a glaring problem. I opted to show the first result given to me by both Apple Clean Up and Generative Remove, but you may get different results on your end. That speaks to a consistency issue, too, and loops back around to Adobe’s future Generative Credits system.

Adobe declined to say specifically what might have happened with this model, offering only: “We’re looking into this and always appreciate feedback from the community as we continue to improve the quality of our tools.”

We plan to revisit this comparison after Adobe figures out what is causing Firefly to generate these, frankly, bizarre results.

An Excellent Showing for Apple’s First Attempt

On the other side of the coin, Apple’s first attempt at an AI removal tool is, generally, a success. I wouldn’t say it gets full marks, but it at least does what it promises — it cleans up images by removing unwanted objects. The escalator photo and the power lines are perhaps its best wins, with results I would say look very real. Clean Up’s results are also very predictable, giving the same result every time I processed the images. That’s probably because Apple’s system is focused squarely on removal, which is easier to build and provides a more consistent result, compared to Adobe’s, which runs on an AI model that also generates images out of thin air.

It’s also worth noting that if you have a Mac that can run macOS Sequoia, Clean Up is free. Access to Adobe’s Generative Remove and Generative Fill tools is only available to subscribers.

When Apple’s results look this good, calls for Aperture’s return are only going to get louder.


Image credits: Photographs by Jaron Schneider. Elements of header photo licensed via Depositphotos.
