Europe—worried about AI risks like deepfakes and hallucinations—still has questions for Big Tech companies

Microsoft’s Bing, Instagram, Snapchat, YouTube, X, and others are being probed in Europe over the risks of their generative-AI tech.

BY Associated Press

European Union regulators on Thursday ratcheted up scrutiny of Big Tech companies, including Google, Facebook, and TikTok, with requests for information on how they’re dealing with risks from generative artificial intelligence, such as the viral spread of deepfakes.

The European Commission, the EU’s executive branch, has sent questionnaires about the ways that eight platforms and search engines—including Microsoft’s Bing, Instagram, Snapchat, YouTube, and X, formerly Twitter—are curbing the risks of generative AI.

The 27-nation bloc is flexing new regulatory powers acquired under the Digital Services Act, a sweeping set of regulations that took effect last year with the aim of cleaning up big online platforms and keeping users safe, under threat of hefty fines.

The EU is wielding the DSA and other existing regulations to govern AI until its groundbreaking rulebook for the technology takes effect. Lawmakers approved the AI Act, the world’s first comprehensive AI rules, but the provisions covering generative AI won’t kick in until next year.

Other AI-related risks that the commission is worried about include systems coming up with false information—known as “hallucinations”—and the automated manipulation of services to mislead voters.

The commission said its requests for information are about both the creation and spread of generative-AI content. For example, it’s seeking internal documents on how companies have reviewed the risks and worked to mitigate them as they deal with generative AI’s impact on everything from electoral processes and the spread of illegal content to gender-based violence and the protection of minors.

European authorities are probing tech platforms’ readiness for AI-fueled misinformation and disinformation as they prepare for EU-wide elections set for early June. Commission officials said they want to know whether big online platforms are ready in case a “high-impact” deepfake appears at the last minute and spreads widely.

The EU wants answers from companies on their election protections by April 5 and on the other topics by April 26. The commission could follow up with a more in-depth investigation, but it’s not guaranteed.

Chinese e-commerce platform AliExpress also faces DSA scrutiny. The commission said it opened formal proceedings to determine whether the company failed to protect consumers by allowing the sale of risky products such as fake medicines, and failed to protect children in particular by allowing access to pornography. A lack of measures to stop influencers peddling illegal or harmful products also is being examined, it said.

AliExpress said in a statement that it respects all rules and regulations in the markets where it operates.

The company said it has been “working with, and will continue to work with, the relevant authorities on making sure we comply with applicable standards and will continue to ensure that we will be able to meet the requirements of the DSA.”

Separately, the commission asked LinkedIn for information on whether it’s complying with the DSA’s ban on targeting ads to people based on sensitive types of personal data such as sexual orientation, race, and political opinions.

—Kelvin Chan, Associated Press business writer
