Meta security analysts warn of malicious fake ChatGPT tools

By Webdesk

Meta's security team warns that a wave of fake ChatGPT malware is circulating, built to hack user accounts and take over company pages.

In the company's new Q1 security report, Meta notes that malware operators and spammers follow whatever trends and topics are currently grabbing people's attention. The biggest tech trend right now is AI chatbots like ChatGPT, Bing, and Bard, so tricking users into trying a fake version has become the fashionable lure – sorry, crypto.

Meta security analysts have found around 10 malware families posing as AI chatbot tools such as ChatGPT since March. Some exist as web browser extensions and toolbars (classic), and some are even available through unnamed official web stores. The Washington Post reported last month on how this fake ChatGPT scam has also been spreading through Facebook ads.

Some of these malicious ChatGPT tools even include working AI to pass as a legitimate chatbot. Meta says it has blocked more than 1,000 unique links to the discovered malware iterations shared across its platforms. The company has also provided technical background on how scammers gain access to accounts, including by hijacking logged-in sessions and retaining access – a method similar to the one that brought down Linus Tech Tips.
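To see why hijacking a logged-in session works, consider that a web server typically recognizes a session token, not the person presenting it. The following is a minimal, hypothetical sketch (a toy in-memory session store, not Facebook's actual mechanism): once malware exfiltrates a session cookie from the browser, the attacker's requests are indistinguishable from the victim's, with no password needed.

```python
import secrets

# Toy server-side session store: token -> username.
SESSIONS = {}

def log_in(username: str) -> str:
    """Issue a session token after a (hypothetical) password check."""
    token = secrets.token_hex(16)
    SESSIONS[token] = username
    return token

def handle_request(cookie_token: str) -> str:
    """The server only checks that the token is valid -- it cannot tell
    whether the request comes from the original browser or from malware
    that stole the cookie."""
    user = SESSIONS.get(cookie_token)
    if user is None:
        return "401 Unauthorized"
    return f"200 OK: acting as {user}"

# Victim logs in; a malicious extension copies the cookie from the browser.
victims_token = log_in("page_admin")
stolen_token = victims_token  # exfiltrated by the malware

# The attacker replays the stolen cookie and is treated as the victim.
print(handle_request(stolen_token))  # "200 OK: acting as page_admin"
```

This is also why simply changing a password may not evict an attacker: as long as the stolen session token remains valid on the server, access is retained until those sessions are revoked.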

For any business that has been hacked or locked out of its Facebook account, Meta is providing a new support flow to help it recover access. Company pages generally succumb to hacking because the individual Facebook users with access to them become targets for malware.

Meta is now also rolling out new Meta Work accounts that support organizations' existing, and generally more secure, single sign-on (SSO) credential services, with no link to a personal Facebook account at all. Once a business account is migrated, the hope is that it will be much harder for malware like these fake ChatGPT tools to attack it.
