TikTok owner ByteDance has an AI play—and U.S. lawmakers won’t like it

 

By Clint Rainey

ByteDance, the Chinese owner of TikTok, is reportedly planning to unveil a new platform this month to compete more directly with OpenAI. According to a memo seen by the South China Morning Post, the unnamed “bot development platform” will give users a tool for building their own generative AI-powered chatbots. It would mark a new foray into the AI revolution by the world’s biggest unicorn (the term for a privately held startup worth at least $1 billion), and would presumably raise a slew of new security concerns among the company’s growing list of U.S. critics.

The Morning Post revealed few other details beyond quoting a line from the company-wide memo, which explained that the DIY bot builder fits into ByteDance’s revamped corporate strategy to “explore new generative AI products and how they can integrate with the existing ones.” The memo stated that a beta version of the platform should debut before year’s end. (After publication, ByteDance told Fast Company that the chatbot platform wasn’t announced in a company-wide memo, as the South China Morning Post had reported; rather, it was contained in an internal document laying out ByteDance’s business goals. “Besides this, we don’t have much information to share at this moment,” a representative added.)

Broadly speaking, the platform appears designed to compete with the custom ChatGPT tools that OpenAI revealed in November at its first developer day, which offer “a new way for anyone to create a tailored version of ChatGPT” to help do things like “learn the rules to any board game, help teach your kids math, or design stickers”—no prior coding experience required. OpenAI also announced the GPT Store as a place for verified builders to share their creations. Originally, the company said the store would launch this year, but last week it noted that due to “unexpected things . . . keeping us busy” (meaning, of course, Sam Altman’s sudden firing and rehiring), the release is now delayed until early 2024.

Meanwhile, ByteDance has also kept busy, launching its own rival chatbot, Doubao, this summer. The Morning Post reported that ByteDance is also at work on an alternative to popular AI image generators like Midjourney, Stable Diffusion, and DALL-E. Having an image generator built natively into an app like TikTok, which has more than 3.5 billion downloads, would certainly give critics fresh reason for pause, since it could hand an even larger pool of bad actors a convenient way to flood the platform with deepfakes.

It’s unclear in which markets ByteDance’s chatbot builder will be released, and whether that list includes the U.S. or China. Chinese regulators impose strict rules on the development of AI, erecting a considerable barrier there (and helping to explain why OpenAI’s services aren’t available in China, either). In the U.S., lawmakers are increasingly spoiling for a fight over TikTok, which, beyond its China ties, also collects more user information than rival social networks.

However, attempts to curb TikTok’s reach over these concerns have been fairly toothless so far. In March, FBI Director Christopher Wray said TikTok “screams” of national security concerns. The U.S. intelligence community has warned that Beijing could invoke China’s 2017 National Intelligence Law to force ByteDance to disclose the private data of American users, or simply demand that the company use its algorithm to promote pro-China disinformation. In April, Montana passed the country’s first outright ban on the app—but a federal judge blocked it last week on the grounds that it infringed on users’ free speech. (As a Chinese company incorporated in the Cayman Islands, ByteDance can’t assert a constitutional right to free speech. But the app’s 150 million American users can, and several in Montana did.)

The ruling followed a September Forbes report revealing that nearly all TikTok and ByteDance employees have had access to—and therefore the ability to rummage through—contact information for public figures and celebrities, ranging from Beyoncé to President Biden’s family members. That came a few months after a former ByteDance employee claimed in court that Chinese Communist Party members had used a special “god credential” to view the private data of Hong Kong’s civil-rights protesters. According to the court filing, the government used the data to identify the activists so that it could track them down.

 

Generative AI poses a separate set of privacy concerns. To understand them, just consider OpenAI’s own legal problems: The company has already faced two separate class-action lawsuits from Americans who claim its use of web scraping to train its AI models violated their privacy on “an unprecedented scale.” The comedian Sarah Silverman and several other authors are among a group alleging that OpenAI broke copyright law by feeding ChatGPT their work without permission. The Federal Trade Commission (FTC) has also joined the privacy-safeguards fray, announcing an investigation over the summer into whether the company has harmed consumers by collecting data without their permission and by publishing factually incorrect information. OpenAI responded by saying that website operators could now block its GPTBot web crawler from scraping their data—an update that media companies in particular quickly scrambled to implement.
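
That opt-out relies on the long-standing robots.txt convention: OpenAI’s crawler identifies itself as GPTBot, and the company’s guidance is that site owners add an entry naming it. A minimal, illustrative entry that blocks the crawler from an entire site looks like this:

User-agent: GPTBot
Disallow: /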

It remains to be seen whether a Chinese-owned competitor would follow OpenAI’s lead, and what external pressure its own government might apply in the meantime.

Fast Company
