

ChatGPT scams are the new crypto scams, Meta warns

Karissa Bell

As the buzz around ChatGPT and other generative AI has grown, so has scammers' interest in the tech. In a new report, Meta says it has seen a sharp uptick in malware disguised as ChatGPT and similar AI software.

In a statement, the company said that since March 2023 alone, its researchers have discovered "ten malware families using ChatGPT and other similar themes to compromise accounts across the internet" and that it has blocked more than 1,000 malicious links from its platform. According to Meta, the scams often involve mobile apps or browser extensions posing as ChatGPT tools. And while in some cases the tools do offer some ChatGPT functionality, their real purpose is to steal users' account credentials.

In a call with reporters, Meta Chief Security Officer Guy Rosen said the scammers behind these exploits are taking advantage of the surge in interest in generative AI. "As an industry we've seen this across other topics that are popular in their time such as crypto scams fueled by the immense interest in digital currency," Rosen said. "So from a bad actor's perspective, ChatGPT is the new crypto."

Meta noted that people who manage businesses on Facebook or who otherwise use the platform for work have been particular targets. Scammers will often go after users’ personal accounts in order to gain access to a connected business page or advertising account, which are more likely to have a linked credit card.

To combat this, Meta said it plans to introduce a new type of account for businesses called “Meta Work” accounts. These accounts will enable users to access Facebook’s Business Manager tools without a personal Facebook account. “This will help keep business accounts more secure in cases when attackers begin with a personal account compromise,” the company said in a statement. Meta said it will start a “limited” test of the new work accounts this year and will expand it “over time.”

Additionally, Meta is rolling out a new tool that will help businesses detect and remove malware. The tool “guides people step-by-step through how to identify and remove malware, including using third-party antivirus tools” to help prevent businesses from repeatedly losing access to accounts.

Meta's researchers aren't the first to warn about fake ChatGPT tools leading to hacked accounts. Researchers recently warned about a Chrome extension posing as ChatGPT software that led to the hacking of a number of Facebook accounts. The exploit, reported by Bleeping Computer, became known as the "Lily Collins" hack because the names on victims' accounts were changed to "Lily Collins."

During a call with reporters, Meta's Head of Security Policy, Nathaniel Gleicher, said these attacks also often target people connected to businesses. "What they'll want to do is to close that personal account to burn their access and prevent the legitimate user from getting back in," he said. "One of the tactics we're now seeing is where they will take the personal account and rename it to have the name of a prominent celebrity in hopes that that gets the account taken down." He added that the new work accounts would help prevent similar hacks in the future.

 
