Over the course of the past month, security analysts with the social-media giant have found malicious software posing as ChatGPT or similar AI tools, chief information security officer Guy Rosen said in a briefing.

"The latest wave of malware campaigns have taken notice of generative AI technology that's been capturing people's imagination and everyone's excitement," Rosen said.

Meta, the parent company of Facebook, Instagram and WhatsApp, often shares what it learns with industry peers and others in the cyber defense community.

Meta has seen "threat actors" hawk internet browser extensions that promise generative AI capabilities but contain malicious software designed to infect devices, according to Rosen.

In general, it is common for hackers to bait their traps with attention-grabbing developments, tricking people into clicking on booby-trapped web links or downloading programs that steal data.

"We've seen this across other topics that are popular, such as crypto scams fueled by the immense interest in digital currency," Rosen said. "From a bad actor's perspective, ChatGPT is the new crypto."

Meta has found and blocked more than a thousand web addresses that are touted as promising ChatGPT-like tools but are actually traps set by hackers, according to the tech firm's security team.

Meta has yet to see generative AI used as more than bait by hackers, but is bracing for the inevitability that it will be used as a weapon, Rosen said. At the same time, Meta teams are working on ways to use generative AI to defend against hackers and deceitful online influence campaigns.

"Generative AI holds great promise and bad actors know it, so we should all be very vigilant to stay safe," Rosen said.