DALL-E can now use AI to extend images as a human artist might

By Mark Sullivan

August 31, 2022

Since it was announced in April, the text-to-image AI tool DALL-E 2 has been wowing artists, researchers, and media types with its high-quality images. Now, four months later, developer OpenAI is giving DALL-E 2 a new trick: the ability to extend the images it creates beyond their original borders in logical and creative ways.

The new feature, which OpenAI calls “outpainting,” could be useful to graphic designers who need to create multiple sizes and shapes of a particular image to present in different contexts. A movie promo image, for instance, might require a perfectly square shape in one context, and a tall rectangular shape in another. For the latter, new art is required to fill in the extra space.


The artist Paul Trillo used outpainting to extend this image of a UFO downward to include the pool.

[Image: courtesy of OpenAI]

DALL-E 2 creates original 1,024-by-1,024-pixel images based on keyword descriptions entered by the user. It can also make images based on objects and styles it sees in other images. For example, it might be given a street art image of a mouse alongside an art deco version, then combine elements of the two styles into an original picture of the rodent. It also has editing capabilities, meaning a user can erase a section of a generated image and then tell DALL-E to add a specific object or style in that area. For instance, if a designer doesn’t like the expressionist red roses in the foreground of an image, they can erase them and ask DALL-E to put photorealistic white orchids there instead.
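For developers working with the API rather than the web editor, that erase-and-replace workflow maps onto OpenAI’s image-edit endpoint. The snippet below is only a minimal sketch of the idea, using the OpenAI Python library of this period; the filenames, mask, and prompt are illustrative stand-ins, not part of OpenAI’s own demo.

    import openai  # OpenAI's Python library (DALL-E 2-era interface)

    # Assumes OPENAI_API_KEY is set in the environment.
    # "scene.png" and "mask.png" are hypothetical files: the mask is a copy of
    # the image whose transparent pixels mark the erased region to repaint.
    response = openai.Image.create_edit(
        image=open("scene.png", "rb"),   # the original 1024x1024 PNG
        mask=open("mask.png", "rb"),     # transparent where the roses were erased
        prompt="photorealistic white orchids in the foreground",
        n=1,                             # one replacement candidate
        size="1024x1024",
    )
    print(response["data"][0]["url"])    # URL of the edited image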

Now, the editing interface is getting some new buttons to control the expansion of images. In a demo Tuesday, I watched OpenAI engineer David Schnurr extend an image DALL-E had created earlier based on the keywords “two teddy bears mixing sparkling chemicals inside of a laboratory.” I saw a kind of steampunk-style image of two cute teddy bears wearing goggles standing at a lab table in the foreground. Schnurr wanted to extend the image to show more area above the teddy bears. So he positioned the bottom half of a blue square over the top-left section of the image, which told the AI to use the storybook laboratory context and vibe in the lower half of the square as the basis for extending the image into the square’s top half.

“We’re adding more sort of laboratory concepts into the image, and then we can also expand upwards and really just make an image that’s as big as we would like,” Schnurr says. 

Say Schnurr had wanted DALL-E to include something specific in the extended area of the image, like a cuckoo clock hanging on the wall above the bears. He could have done that by giving DALL-E some additional keywords.

Actually, Schnurr tells me, DALL-E creates four different versions of the extended area, from which the user can choose. If they don’t like any of the four, they can try the extension function again, perhaps with different keywords.
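OpenAI hasn’t published the internals of the outpainting feature, but a rough approximation of the same idea is possible with the public edit endpoint: place the existing picture on a larger, partly transparent canvas and let DALL-E fill in the empty region, requesting four candidates to mirror the editor’s behavior. The sketch below is an assumption-laden illustration; the filenames, crop, and stitching step are my own, not OpenAI’s implementation.

    import openai
    from PIL import Image

    # Keep the top half of the existing 1024x1024 picture as context in the
    # bottom half of a new, otherwise transparent canvas. The empty top half
    # is the area DALL-E fills; the result can be stitched above the original.
    original = Image.open("teddy_bears.png").convert("RGBA")
    context = original.crop((0, 0, 1024, 512))            # top half of the original
    canvas = Image.new("RGBA", (1024, 1024), (0, 0, 0, 0))
    canvas.paste(context, (0, 512))
    canvas.save("canvas.png")

    # With no separate mask, the image's transparent pixels act as the mask.
    # Assumes OPENAI_API_KEY is set in the environment.
    response = openai.Image.create_edit(
        image=open("canvas.png", "rb"),
        prompt="two teddy bears mixing sparkling chemicals inside of a laboratory",
        n=4,                              # four candidate extensions, as in the editor
        size="1024x1024",
    )
    for candidate in response["data"]:
        print(candidate["url"])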

DALL-E product manager Joanne Jang says the new feature was driven directly by feedback from DALL-E users. Filmmakers are using DALL-E to cut storyboarding time in half, Jang says, and they might want to experiment with closer or wider shots during the creative process. Game designers have been using DALL-E to reduce the time it normally takes concept artists to create new scenes and actions.

The outpainting feature isn’t a free add-on. Every DALL-E beta user gets 50 free credits during their first month of use and 15 free credits every subsequent month. Every time a user generates an additional section of an image, it costs them a credit. Users can purchase additional credits in 115-generation packs for $15, OpenAI says.

Jang says more than a million users have been invited into the DALL-E beta program, including more than 3,000 working artists. As a result, OpenAI has been fielding a lot of different kinds of feedback on how to improve DALL-E’s tools.

But one ask seemed to cut across user types, Jang adds: “I think amongst all those feedback points, one thing that was pretty commonly requested was a flexibility in aspect ratios.”
