This MIT Scientist’s Donald Trump Bot Needs A Little Human Assistance

For anyone who just can’t get enough of Donald Trump’s social media posts, a Massachusetts Institute of Technology researcher has created a bot that generates tweets in the candidate’s style.

Bradley Hayes, the program’s creator, says he came up with the idea for the bot—called DeepDrumpf, after Trump’s ancestral surname (which was made famous by a viral segment from Last Week Tonight with John Oliver)—while at lunch with a coworker.

“We were kind of joking about incendiary and controversial things that Trump had been saying,” says Hayes, a postdoc in MIT’s Interactive Robotics Group. “We started talking, and thought, we probably could try to model that.”

Hayes had previously read an article by Stanford researcher Andrej Karpathy about using computational tools called recurrent neural networks to imitate the styles of writers ranging from William Shakespeare to Y Combinator founder Paul Graham, and decided to use the same technique to build his Trump simulator.

The neural networks generate text character by character, based on the text they were trained on and on what they’ve already emitted in a given session, Hayes says. But they’re statistically powerful enough to learn basic grammar rules and even to generate opening and closing quotation marks in matched pairs, he says.
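Hayes used a recurrent neural network, but the one-character-at-a-time sampling loop it runs can be illustrated with something much simpler. The sketch below is a character-level Markov chain, not the RNN Hayes built: it conditions each new character only on the last few characters rather than on a learned hidden state, but the generate-one-character-at-a-time mechanic is the same. All names here are illustrative.

```python
import random
from collections import defaultdict, Counter

def train_char_model(text, order=3):
    """Count which character follows each length-`order` context in the text."""
    model = defaultdict(Counter)
    for i in range(len(text) - order):
        context = text[i:i + order]
        model[context][text[i + order]] += 1
    return model

def generate(model, seed, length=100, rng=None):
    """Emit text one character at a time, conditioned on the last few characters."""
    rng = rng or random.Random(0)
    order = len(seed)
    out = seed
    for _ in range(length):
        counts = model.get(out[-order:])
        if not counts:  # context never seen in training: stop early
            break
        chars, weights = zip(*counts.items())
        out += rng.choices(chars, weights=weights)[0]
    return out
```

A real RNN replaces the fixed-length context lookup with a hidden state that can, in principle, remember arbitrarily distant context — which is how it manages tricks like closing a quotation mark it opened many characters earlier.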

“It learns all of the grammatical structure just from the data,” he says. “The only thing I had to do was feed it all the raw texts from all of Trump’s speeches.”

Hayes says he initially planned to have the program automatically tweet a Trump-style message every few hours, but he quickly realized the bot’s personality was just too volatile. During the bot’s first week, he fed it text from a Hillary Clinton tweet about President Obama’s employment policies. The response was so violent he worried he’d get a call from the Secret Service if the post made it to Twitter.

“The bot had proposed tweeting back to Hillary Clinton and the @POTUS account something like, ‘You’re only creating jobs for ISIS—I’ll send terrorists after you,’” he recalls. “That was bad.”

It’s a lesson similar to the one recently learned by developers at Microsoft, who infamously watched their Twitter chatbot, Tay, transform from a bubbly teenager into an angry racist after scooping up speech patterns from online trolls.

That’s why Hayes’s bot only tweets through Hayes, its human “campaign manager.” He says he usually has it generate a block of about 1,000 characters, runs the block through a few automated filters, such as a spellchecker that fixes the words the bot hasn’t quite mastered, and grabs one of the most entertaining substrings to tweet.

Just as the bot doesn’t always spell correctly without help, it doesn’t always string words together coherently, so not every (nonviolent) substring would be a great tweet.
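The article doesn’t show Hayes’s actual curation code, but the general shape it describes (generate a block, filter it, keep tweet-sized substrings) could look something like this minimal sketch. The function name, the sentence-splitting heuristic, and the blocklist filter are all assumptions, not his pipeline:

```python
import re

TWEET_LIMIT = 140  # Twitter's character limit at the time

def candidate_tweets(block, banned_words=()):
    """Split a generated block into sentence runs that fit in one tweet,
    dropping any run containing a word on a (hypothetical) blocklist."""
    sentences = re.split(r'(?<=[.!?])\s+', block.strip())
    candidates = []
    for start in range(len(sentences)):
        for end in range(start + 1, len(sentences) + 1):
            text = " ".join(sentences[start:end])
            if len(text) > TWEET_LIMIT:
                break  # adding more sentences only makes it longer
            if any(w in text.lower() for w in banned_words):
                continue
            candidates.append(text)
    return candidates
```

A human would still pick the most entertaining candidate from the list — which is exactly the step Hayes kept for himself.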

Other Twitter users—from Trump voters who enjoy seeing references to their candidate to Democrats who see it as a worthy parody—generally interact positively with the bot, says Hayes. Those who don’t are usually trying to test the bot’s limits, or to see how it reacts to direct messages and tweets aimed at it. Hayes says that’s important for bot designers to keep in mind: people don’t interact with machines the same way they treat humans on the same platform.

“It’s fairly naive to assume that people will just treat these like people,” he says.

Hayes hasn’t limited his bot-making to the Republican side of the aisle: He has also created a similar Bernie Sanders bot, called DeepLearnTheBern, though he hasn’t put as much effort into that one as the more-incendiary Trump bot.

“The reason I haven’t been active on that one is, one, it takes time to curate all the transcripts,” he says. “And, two, the things that have been coming out of it, for the most part, have kind of just been reasonable, which is not super funny.”

Hayes says he’s not sure the bots will directly make it into his academic work, since the actual computational techniques involved aren’t that novel, though he thinks they’re still interesting as a sociological study of human-computer interaction. Still, the same kinds of neural networks that predict the next character Donald Trump would emit in a tweet can be used for other purposes, like predicting how a robot arm should move, he says.

As to whether one of these politicians might want to deploy a bot of his own, Hayes says it might be too risky, since it’s hard to keep the bots on message.

“If you were a candidate, you would want your candidate to propagate your own beliefs,” he says. “One of the dangers, and the reason why my [Trump] bot isn’t fully autonomous, is it’s too unpredictable in what it can generate.”

Fast Company
