Ever since the release of ChatGPT last year, new generative AI tools and services have captured people’s attention. Now, Meta is warning that bad actors have taken notice of that interest. The Facebook parent said scammers are creating malware that poses as ChatGPT and similar tools.
In a security report released Wednesday, Meta said it discovered 10 malware families posing as ChatGPT or related tools since March. Some of the malicious software, which can steal your personal information and compromise accounts, came in the form of browser extensions and links. Meta said it removed more than 1,000 malware links from its apps.
“The generative AI space is rapidly evolving and bad actors know it,” Guy Rosen, Meta’s chief information security officer, said in a statement. “As an industry, we’ve seen this across other topics popular in their time, such as crypto scams fueled by the interest in digital currency.”
Rosen compared the exploitation of interest in AI chatbots for malicious purposes to the crypto scams that flourished on social media just a few years ago.
Generative AI tools like ChatGPT and Google Bard exploded in popularity this year. Users can not only pose questions to these chatbots but also ask them to write a poem, a cover letter or even music. More tech companies are looking to incorporate AI into their services or develop their own models. This growing interest, much like the crypto craze of a few years ago, makes people curious about AI a prime target for scammers.
Rosen also noted that scammers spread their activity across multiple services to avoid detection, including several social media platforms, different web browsers and file-hosting services. If they get caught on one platform, they can make slight tweaks to continue spreading malware on another.
Meta said it’ll continue to roll out new protections against these malware campaigns and work with other companies to stay on top of threats.