I tried Photoshop’s new generative AI tools. They’re not going to steal your design job—yet


By Jesus Diaz

When I heard Adobe was going to add its generative AI Firefly engine right into Photoshop—touting it as a “major new release” of the venerable pixel toolbox, no less—I got really excited. So excited, in fact, that my working title for this article was New Photoshop is the biggest version since 1991. The idea of professional photo-editing software that could let me freely manipulate and create synthetic images within my regular image workflow sounded like a dream come true. The concept has the potential to shake up the entire industry.

Then I tried it and, well, my hopes turned to shattered dreams, as Stevie Wonder once sang. This ain’t a kind of magic, my friends—at least not yet.

Earlier this week, Adobe launched Photoshop beta 24.6, which includes the beta version of the company’s new AI tool, called “Generative Fill.” All current Creative Cloud subscribers have access to the feature ahead of its wider launch later this year. It allows them to use AI to do things like expand the borders of an image, automatically fill in blank space, and generate images via prompts, much as you might with Midjourney or DALL-E.

The tool brings some convenient tricks to image editing, but it has a few notable problems, too. The update feels like an early attempt to integrate a work-in-progress AI into a professional photo-editing pipeline—one that can’t yet compete with the more advanced image-generation tools that currently exist.

It’s really too bad. Having been a raster junkie since the Amiga days of Deluxe Paint II and, later, the truly revolutionary Photoshop 1.0 for Mac, I was looking forward to a new magical program that would allow me to manipulate images like never before. I was especially excited to see how it would work in tandem with the other AI tools I’ve been using, and hopeful that it could eventually replace them, making Photoshop my sole generative AI stop.

[Image: Adobe]

Don’t get me wrong: Adobe’s Firefly generative AI is quite convenient for a very limited set of tasks. While, for years now, Adobe’s star program has had some serviceable AI tools, like the acceptable neural image resizing, and some useless AI tools (like facial manipulation), this is the first time it has included powerful image generation capabilities that could actually help professionals hand off some of the menial tasks of their daily design business.

The good, the bad, and the very ugly

The Photoshop beta is mostly the same as the previous version, aside from a new contextual bar with a button that says Generative Fill. This bar appears anytime you make a selection on the canvas. Clicking the button displays a field where you can type what you want Photoshop to imagine, or you can leave it blank if you want Photoshop to decide what to create based on the context of the image you are editing.

This ability is what other AI programs like DALL-E call “inpainting” and “outpainting.” When you lasso a piece of an existing image and type “a Godzilla monster” into the contextual bar, Photoshop uses Adobe’s cloud-based Firefly AI engine to automatically inpaint a giant lizard beast that matches the background photo’s coloring and lighting.

Likewise, when you select, say, an unwanted person in the image of a landscape but don’t type any prompt after clicking Generative Fill, Photoshop will eliminate that person, inpainting the gap with AI-generated background meant to erase them seamlessly. And if you want to expand the boundaries of a landscape, you can select one of the resulting blank parts of the canvas and click Generative Fill, and the AI will outpaint the scene with more landscape full of invented detail.
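Photoshop hides all of this behind a single button, and Adobe hasn’t published a public Firefly API, but if you’re curious what inpainting looks like under the hood, here is a minimal sketch using the open-source diffusers library with a Stable Diffusion inpainting model (not Adobe’s engine); the file names are placeholders, and the mask simply marks, in white, the region the model is allowed to repaint.

```python
# Illustrative only: inpainting with the open-source diffusers library
# and a Stable Diffusion inpainting model, not Adobe's Firefly.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")  # assumes an NVIDIA GPU

# The base photo plus a black-and-white mask: white pixels mark the
# "lassoed" region the model should repaint. File names are placeholders.
photo = Image.open("landscape.png").convert("RGB").resize((512, 512))
mask = Image.open("selection_mask.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="a Godzilla monster",
    image=photo,
    mask_image=mask,
).images[0]
result.save("landscape_with_godzilla.png")
```

Outpainting works the same way in principle: enlarge the canvas, mask the newly blank border, and let the model fill it in with more scene.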

Unlike Photoshop’s previous content-aware fills, which would randomly repeat patterns and require further manual patching work, the Firefly-enabled beta excels at outpainting landscapes and eliminating objects. The new generative AI-based fill is so good that it will save a lot of time for the many people doing these tedious tasks.

Where Photoshop’s generative AI falls short is in everything else it promises to do. Simply put: At this point, the quality of the new object generation is just not good enough for professional-level work. The software seems to suffer from the same lack of realism seen in older versions of Stable Diffusion or the current DALL-E. Poor representation of anatomical details results in deformed hands; other objects are deformed to the point of being unusable. And, as I discovered, Firefly often misinterpreted my prompts.

To test the feature’s capabilities, I ran a very simple experiment: I took a photo of my son and me with the intent of doctoring it in ways that are often problematic for generative AI. The AI was able to neatly eliminate the board game from the image, but my kid [whose eyes I pixelated on purpose] lost part of his fingers when I asked Photoshop to “swap the apple for a toy dinosaur.” I wanted to add “Converse-like sneakers” to his foot, but the shoes didn’t look real or have the right perspective. Likewise, his glasses were neither “in the style of Ray Ban Wayfarers,” as I prompted, nor did they sit right on his face. My hand got deformed when I tried to get a “hand in a leather glove,” and that’s definitely not the “cowboy hat” I prompted for so many times before ultimately giving up.

[Screenshot: courtesy of the author]

Firefly’s subpar performance on prompts like these is crystal clear when you compare it to other AI tools, such as Flying Dog, an image editor based on Stable Diffusion.


Clearly, while Photoshop’s new AI is quite good—and convenient!—for outpainting landscapes or eliminating objects, it is really not production-ready for everything else. Maybe by the end of the beta period, Adobe will have addressed these issues, but for now, it just can’t compete with the realism of apps like Midjourney or Stable Diffusion XL. Those tools are far ahead of anything else out there right now: they have mostly solved things like coherence—so you get accurate hands, faces, and objects in general—and greatly increased realism in every way. This is where Adobe’s tech lags behind.

I found other usability problems that make Photoshop’s AI less useful than it could be. Whenever you create something with AI, a new “Generative AI” layer is created. This is good because it doesn’t destroy the original image; instead, it composites multiple layers on top. However, if you try to move any of these layers around, they come with a bit of a background halo. As you can see in the image below, this makes it impossible to reposition my Godzilla-like monster.

[Screenshot: courtesy of the author]

You might have imagined that, as you moved your AI-made object around, it would regenerate automatically to merge with the layers below. But no, it just keeps that halo background, making the generated image unusable anywhere but in its original spot. To reuse the Godzilla—as you can see with the one in the center of the road—you have to extract the object and eliminate the background before repositioning it. But then it may look unrealistic, because the lighting and color will not match its new location in the original photo.

The cost of cloud-based generative AI

Unlike Stable Diffusion—which you can download and install on your computer to run locally for free—generative AIs like DALL-E and Midjourney require you to pay for tokens or subscriptions because they run on powerful processors in the cloud. Firefly similarly doesn’t run locally on your computer but on Adobe’s cloud.
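To make the contrast concrete, here is a minimal sketch of what “running locally for free” means in practice, again using the open-source diffusers library; it assumes a machine with an NVIDIA GPU, and the prompt and file name are just examples.

```python
# Generating an image entirely on your own machine with Stable Diffusion,
# via the open-source diffusers library. After the one-time download of
# the model weights, no tokens, subscription, or cloud service is needed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")  # assumes an NVIDIA GPU

image = pipe("a photorealistic cowboy hat on a wooden table").images[0]
image.save("cowboy_hat.png")
```

Everything after the initial download happens on your own hardware, which is why there is no per-image cost—and no server deciding what you are allowed to generate.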

While Photoshop’s AI image generation will be free for the time being, an Adobe spokesperson told me via email that there are no details yet about future costs: “Generative Fill is only available in the Photoshop beta app currently for all PS users at no extra cost and will be available for all second half of 2023. We will share additional details closer to when this is out of beta.” In other words: There’s no guarantee that it will be free when it makes it into the final version of Photoshop. In fact, given the huge processing cost, it wouldn’t be crazy for there to be an extra charge associated with it (though I hope I’m mistaken).

Running in the cloud also has other important drawbacks. Since anything you make with Photoshop’s Firefly capability will be sent to Adobe’s computers, users are open to potential privacy issues. It also exposes your professional work to arbitrary censorship, which is strikingly aggressive in this beta: Our art director tried the prompt “a diagram of a human breast, anatomical, cancer self-exam,” which was automatically flagged as prohibited content.

[Screenshot: FC]

According to Adobe, this helps “ensure images are used responsibly, any images contributed to Adobe Stock or any Firefly-generated images are pre-processed for content that violates our terms of service; violating content is removed from our prompts, or the prompt is blocked altogether.” This includes everything from nudes to third-party brands or copyrighted cartoon characters. For many professional users, however, this censoring will limit the usability of Firefly’s AI capabilities for work that is perfectly ethical but doesn’t comply with Adobe’s definition of what is proper.

On the positive side, Adobe’s ethical approach works well to protect the creative community that fuels it. Firefly’s AI model was fed a strict diet of Adobe Stock photos and public-domain images, according to Adobe. However, the Adobe spokesperson says the company still hasn’t announced a plan to compensate the authors of the images used to train the model: “We plan to share details on how we plan to compensate creators for this work [when this gets out of beta].”

This version of Photoshop does implement an important feature called Content Credentials—“a free, open-source tool that serves as a digital ‘nutrition label’ for content through the Adobe-led Content Authenticity Initiative (CAI).” Designed to increase trust in online content, these credentials will be added to any image manipulated with Firefly’s generative AI.

[Image: Adobe]

An uncertain future

Perhaps my disappointment with Firefly and Photoshop is a matter of mismanaged expectations. But the fact is, while the initial articles about it have been fawning, Firefly can’t stand the test of actual professional use beyond outpainting and object removal. These tools are helpful, but they don’t fulfill the potential shown by other new tools like the experimental DragGAN, created by researchers at the Max Planck Institute, which can change any photo with a level of accuracy and realism that is as astonishing as it is believable.

Maybe one day, Adobe will hit the AI ball out of the park. Then, I believe, I may fall in love with the program all over again, like in the good old days of the ’90s. In the meantime, the company should speed up its adoption of new technologies and be truly bold, or it risks being destroyed by the work of researchers and hungry startups.

Fast Company
