These clever tools turn AI prompting into a physical act


By Elissaveta M. Brandon

There’s something unsettlingly mechanical about designing with AI. You type a prompt into an interface, it spits out an image; you type in another prompt, it spits out another image. The process involves a screen, a keyboard, and absolutely zero connection to the outside world. But what if you could prompt AI by manipulating real objects instead?

[Photo: courtesy Zhaodi Feng]

That’s the premise behind Promptac, a design kit that lets designers communicate their ideas not just with their words, but with their hands, too. The kit is made up of six objects that can fit in the palm of your hand and serve as tactile inputs for manipulating an AI model.

[Image: courtesy Zhaodi Feng]

Each object hides a different kind of sensor. One, shaped like a thimble, works a little like Photoshop’s eyedropper tool: tap it over a color or material, and that color or material gets applied to an AI-generated model. Others act like little playthings you can twist, pinch, and bend to influence the shape of the AI model (twist an object, and a vase will turn out twisted; bend it, and the vase will bend).

Promptac (a portmanteau of “prompt” and “tactile”) was designed by Zhaodi Feng, who just presented her project at the Royal College of Art Graduate Show in London. For now, it’s only a prototype, but it’s an ingenious idea that could help creative professionals collaborate with AI in a way that is more tactile, intuitive, and pleasant than the current mode of typing into a text box.

Feng believes the technology could help clients better communicate their ideas to their designers. “AI generation is very quick in concept visualization, so [clients] can use this kit to visualize their ideas through AI, and designers can help them make it or design it in better way,” says Feng.

Creating a design with Promptac begins as it typically would: Type a prompt into the program of your choice and wait for the AI to generate an image. From there, you can forget about the screen altogether. Hunt for colors and materials around you. That tree bark texture? Your RFID (radio-frequency identification) sensor will pick it up. The vibrant green of your desk plant? The color sensor will detect it.
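Feng hasn’t published the kit’s firmware, so the following is only a rough sketch of how a reading like this might travel from sensor to prompt: a minimal Arduino-style program that assumes a hypothetical analog RGB sensor on pins A0 to A2 and sends the dominant channel’s name over serial, where software on the computer could splice a color word into the text prompt. The pins, the sensor, and the serial format are all assumptions, not details of Feng’s prototype.

```cpp
// Illustrative sketch only: hypothetical hardware, pins, and protocol.
// Reads three analog channels from an RGB color-sensor module and reports
// the dominant channel over serial so a host program can fold a color
// word into the AI prompt.

const int RED_PIN   = A0;  // assumed wiring
const int GREEN_PIN = A1;
const int BLUE_PIN  = A2;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int r = analogRead(RED_PIN);    // raw 0-1023 readings
  int g = analogRead(GREEN_PIN);
  int b = analogRead(BLUE_PIN);

  // Use the strongest channel as a crude color label.
  if (r >= g && r >= b)      Serial.println("color:red");
  else if (g >= r && g >= b) Serial.println("color:green");
  else                       Serial.println("color:blue");

  delay(500);  // report twice per second
}
```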

[Image: courtesy Zhaodi Feng]

You can also play around with what Feng calls the “hand manipulation sensors”: the bending tool looks a bit like a rainbow-shaped piece of salmon nigiri and can bend up to 180 degrees; the press tool resembles a round almond cookie and registers different levels of pressure. Feng arrived at these shapes by observing craftspeople manipulating clay with their hands, then distilling their most frequent movements (pinching, pressing, twisting) into discrete objects.


For now, the interaction between the sensors and the on-screen model doesn’t happen in real time, so the immediacy is lacking, but Feng believes she could eventually bring the response time down to between 15 and 30 seconds by using more advanced AI models like DragGAN, an AI tool that lets users load an image and manipulate its contents. Another challenge Feng will need to overcome is making the sensors wireless (they’re currently wired to an Arduino board).
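The article notes only that the sensors are wired to an Arduino board, so here is a hedged sketch of what that side of the system might look like: a program that streams readings from an assumed flex (bend) sensor and force-sensitive resistor over serial, leaving it to software on the computer to translate those numbers into edits to the generated model. The pin assignments, sensor choices, and key:value format are illustrative assumptions.

```cpp
// Illustrative sketch only: the actual circuit and protocol aren't described.
// Assumes a flex (bend) sensor on A0 and a force-sensitive resistor on A1,
// each in a simple voltage-divider circuit.

const int FLEX_PIN  = A0;  // bend sensor (assumed)
const int PRESS_PIN = A1;  // pressure sensor (assumed)

void setup() {
  Serial.begin(115200);
}

void loop() {
  // Map raw 0-1023 readings to rough 0-180 degree and 0-100 percent scales.
  int bendDeg  = map(analogRead(FLEX_PIN), 0, 1023, 0, 180);
  int pressPct = map(analogRead(PRESS_PIN), 0, 1023, 0, 100);

  // Stream simple key:value pairs; a host program listening on the serial
  // port could turn these into shape edits on the on-screen model.
  Serial.print("bend:");
  Serial.print(bendDeg);
  Serial.print(" press:");
  Serial.println(pressPct);

  delay(100);  // roughly 10 updates per second
}
```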

Next up, she’s planning to collaborate with AI researchers, as well as creatives from different sectors, who can inform the future of her project. If architects want to use this tool, for example, could she make similar sensors that are shaped like Legos?

Fast Company
