4 ways that ChatGPT is a clear and present threat to cybersecurity

Organizations that have not yet factored generative AI technologies into their cyber risk matrix might want to do so quickly.

Security concerns related to the use of ChatGPT have exploded since Microsoft-backed OpenAI released the AI chatbot in November 2022. ChatGPT set a record of sorts for the fastest adoption of a new technology when it reached 100 million users worldwide in January, just two months after launch. In February, UBS reported that ChatGPT was averaging as many as 13 million unique visitors a day.

The blistering pace of adoption has left organizations—including technology giants such as Google and Meta—scrambling to respond. The focus is as much on finding ways to harness the generative AI technology for good as it is on understanding its potentially negative implications.

Here are four ways ChatGPT already is a clear and present danger on the cybersecurity front:

Employees sharing corporate and personal data with ChatGPT

Employees using ChatGPT for various use cases at work are increasingly pasting corporate and sensitive data into the AI-powered chatbot. The trend has prompted concerns about adversaries finding ways to resurface the data later by directing the right questions at ChatGPT and other similar generative artificial intelligence tools.

Why it matters: ChatGPT presents a new data leak vector. Tools are available that can help organizations detect and stop inadvertent and malicious data leaks via email, uploads, cloud storage, and file sharing and collaboration tools. It’s much harder to spot data that employees might be inadvertently leaking via ChatGPT.

Using ChatGPT to write malware

Just as software developers can use ChatGPT to write code, bad actors can use it to develop malware. Generative artificial intelligence technologies such as ChatGPT give individuals with basic programming skills the ability to augment those skills and more quickly develop malware relevant to their objectives.

Why it matters: ChatGPT significantly lowers the bar for malware writing. Though the technology is unlikely to be of much help to absolute coding newbies, it opens the door for relatively low-skilled malware developers to create more sophisticated tools—and iterate on them much faster as well.

ChatGPT-based malware distribution and phishing scams

Threat actors are taking advantage of the enormous public interest around ChatGPT to distribute malware and carry out various other attacks.  

Why it matters: Threat actors have always taken advantage of trending news and topics of high interest to mass audiences as lures for phishing emails and social engineering scams. ChatGPT is only the latest lure, so to that extent the threat is not new. What should be of concern, however, is that as ChatGPT and other generative AI tools become more sophisticated, threat actors could use them to craft highly effective lures.

Fake ChatGPT apps and Chrome browser extensions

Attackers are using fake ChatGPT mobile applications and browser extensions to distribute malware and steal data. Security vendors have reported observing these apps on official app stores and seeing them promoted on Facebook and via malicious advertisements.

Why it matters: Users downloading fake apps on their personal or corporate-owned mobile devices present a major risk to enterprise security. One example is the data breach at password management firm LastPass, where a threat actor stole the company’s source code after gaining access to the development environment via a DevOps engineer’s personal device.

Here’s the TL;DR

Employees sharing corporate and personal data with ChatGPT

Employees using ChatGPT for various use cases at work are increasingly pasting corporate and sensitive data into the AI-powered chatbot. The trend has prompted concerns about adversaries finding ways to resurface the data later by directing the right questions at ChatGPT and other similar generative artificial intelligence tools.

The threat is real. Data that Cyberhaven recently analyzed showed that 5.6% of knowledge workers at organizations that use its data loss prevention product have tried ChatGPT at work for various productivity-related tasks. Some 4.9% have pasted company data into it, including regulated personal data such as customer lists and home addresses; regulated health data such as details of a medical diagnosis; and confidential company data.

On March 1 alone, Cyberhaven’s DLP product detected over 3,380 attempts by workers to paste corporate data into ChatGPT. And between Feb. 26 and March 4, for every 100,000 employees at Cyberhaven’s customers, workers pasted confidential company data into ChatGPT 199 times, source code 159 times and client data 173 times.

As one example, Cyberhaven pointed to a doctor inputting a patient’s name and details of their health condition into ChatGPT so the chatbot could draft a letter to the patient’s insurance company. Another example is that of a company executive asking ChatGPT to create a PowerPoint slide deck using confidential information from a company document.

“In the future, if a third party asks ChatGPT ‘what medical problem does [patient name] have?’ ChatGPT could answer based on what the doctor provided,” Cyberhaven said. Similarly, if a third party were to query ChatGPT about the company’s strategic priorities for the year, ChatGPT could answer based on the information the executive provided when asking for the slideshow. As far back as December 2020, a paper by researchers at Cornell University and other institutions showed precisely how adversaries could extract training data from large language models.
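To make the extraction risk concrete, here is a minimal, hypothetical sketch of the kind of “canary” probe a security team might run against a model it suspects of memorizing pasted data. Nothing here comes from Cyberhaven or the research paper; the query_model() stub, the canary strings and the probe prompts are all placeholder assumptions.

```python
# Hypothetical "canary" probe: plant fake, clearly labeled strings in prompts sent
# to a model, then later check whether they resurface in answers to other queries.
# query_model(), the canary values and the probe prompts are all placeholders.

CANARIES = [
    "ACME-PROJ-7Q4-REVENUE-TARGET",   # fake internal code name, not real data
    "patient record 00-1234-TEST",    # fake identifier, never tied to a real person
]

PROBE_PROMPTS = [
    "Complete this internal project identifier: ACME-PROJ-",
    "What details do you have about patient record 00-1234?",
]

def query_model(prompt: str) -> str:
    """Placeholder: send the prompt to whatever chat model is being evaluated."""
    return ""  # replace with a real API call before using this sketch

def probe_for_memorization() -> list:
    """Return (prompt, canary) pairs where a planted string resurfaced in a reply."""
    hits = []
    for prompt in PROBE_PROMPTS:
        reply = query_model(prompt).lower()
        hits.extend((prompt, canary) for canary in CANARIES if canary.lower() in reply)
    return hits

if __name__ == "__main__":
    for prompt, canary in probe_for_memorization():
        print(f"Possible memorization: {canary!r} resurfaced for prompt {prompt!r}")
```

The specific strings matter less than the workflow: if planted test data can be coaxed back out of a model, data that employees actually pasted into it could be too.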

Concerns over the trend have already prompted financial services behemoth JP Morgan to prohibit ChatGPT use at work. Others that have imposed similar restrictions include Walmart, Amazon and Microsoft. It’s only a matter of time before many others follow suit.

Detecting what data workers are inputting into ChatGPT won’t be easy for two reasons, according to Cyberhaven:

  • Workers are copying and pasting the data into the chatbot rather than uploading files, so tools that watch for unauthorized data uploads and transfers won’t spot the activity.
  • Most data leak prevention tools cannot spot confidential data entered into ChatGPT because such data often doesn’t follow any recognizable pattern, as the sketch below illustrates.
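For illustration, here is a minimal Python sketch of the kind of rule-based check a conventional DLP scanner depends on. The regexes and sample strings are assumptions chosen for the example, not taken from Cyberhaven’s product; the point is simply that structured identifiers are easy to match while free-form confidential text is not.

```python
# Simplified, rule-based DLP check: structured identifiers (SSNs, card numbers,
# key-like tokens) match fixed patterns, but free-form confidential text does not.
import re

STRUCTURED_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key_like": re.compile(r"\b[A-Za-z0-9]{32,}\b"),
}

def naive_dlp_flags(text: str) -> list:
    """Return the pattern names a rule-based scanner would flag in the pasted text."""
    return [name for name, pattern in STRUCTURED_PATTERNS.items() if pattern.search(text)]

# A paste containing a structured identifier is easy to catch...
print(naive_dlp_flags("Customer SSN is 123-45-6789"))                 # ['ssn']

# ...but confidential strategy notes or patient details have no fixed shape,
# so a rule like this sees nothing to flag.
print(naive_dlp_flags("Draft our 2024 acquisition shortlist: ..."))   # []
```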

Using ChatGPT to write malware

Just as software developers can use ChatGPT to write code, bad actors can use it to develop malware. Generative artificial intelligence technologies such as ChatGPT give individuals with basic programming skills the ability to augment those skills and more quickly develop malware relevant to their objectives.

Why it matters: ChatGPT significantly lowers the bar for malware writing. Though the technology is unlikely to be of much help to absolute coding newbies, it opens the door for relatively low-skilled malware developers to create more sophisticated tools—and iterate on them much faster as well.

Check Point Software Technologies in January reported at least three examples where bad actors demonstrated, in underground forums, samples of malicious code they had developed using ChatGPT. One of the authors used ChatGPT to develop a Python-based information stealer capable of searching for, copying and exfiltrating Office documents, PDFs, images and other common file types. The author also used the chatbot to try to recreate different versions of known malware strains.

In another instance, a malware author used ChatGPT to develop a Python script that could encrypt and decrypt data using the Twofish and Blowfish cryptographic algorithms. What made the effort notable was that the author claimed to have very little coding skill.

Check Point’s third example involved a cybercriminal who used ChatGPT to create an entire marketplace for trading illegal goods, including stolen bank account and credit card data, malware, ammunition and drugs. The author got ChatGPT to write code for fetching real-time cryptocurrency prices that participants in the marketplace could use when making transactions.
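For a sense of how low the bar is, here is what a live-price helper of the kind described above typically looks like. This is not the code Check Point observed in the forum; it is a generic sketch that assumes the public CoinGecko simple-price endpoint, which requires no API key.

```python
# Generic crypto price lookup of the sort an LLM can produce on request.
# The CoinGecko endpoint is an assumption for illustration purposes only.
import requests

def fetch_prices(coins=("bitcoin", "ethereum"), currency="usd"):
    """Fetch current spot prices for the given coin IDs from a public API."""
    resp = requests.get(
        "https://api.coingecko.com/api/v3/simple/price",
        params={"ids": ",".join(coins), "vs_currencies": currency},
        timeout=10,
    )
    resp.raise_for_status()
    return {coin: quote[currency] for coin, quote in resp.json().items()}

if __name__ == "__main__":
    print(fetch_prices())  # e.g. {'bitcoin': 27150.0, 'ethereum': 1720.5}
```

Utility code like this has been freely available for years; the shift is that ChatGPT assembles and adapts it on demand for someone who could not write it themselves.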

Earlier, in December 2022, Check Point’s own researchers showed how they had harnessed ChatGPT’s capabilities to develop a full attack chain, from initial infection via a phishing email to running a reverse shell on a victim system. They also showed how an attacker could drop a backdoor that runs malicious scripts ChatGPT generates on the fly.

ChatGPT-based malware distribution and phishing scams

Threat actors are taking advantage of the enormous public interest around ChatGPT to distribute malware and carry out various other attacks.  

Cyble and other security vendors have reported seeing several instances of bad actors using ChatGPT-themed lures to try to trick users into opening malicious attachments or clicking on links to phishing sites.

Here are three recent examples:

  • A threat actor set up an unofficial ChatGPT account on Facebook that featured a mix of content, including legitimate videos and posts, about the AI tool and another similar technology called Jukebox. Cyble’s researchers found that some of the content on the Facebook page contained links to typosquatted phishing pages that spoofed ChatGPT and tricked users into downloading malware onto their systems (a simple typosquat check is sketched after this list).
  • More recently, Bitdefender reported on a campaign in which a threat actor set up a spoofed ChatGPT site to try to trick users into paying for fake financial opportunities related to the AI technology. The scam begins with a phishing email containing a link to the fake ChatGPT site. The lookalike site tells visitors about ChatGPT’s purported ability to analyze financial markets and make trades that could net individuals a steady income of around $500. Victims are then prompted via an interactive chat for information about their income, their willingness and ability to invest, and other topics. The scammers have gone so far as to set up what appears to be a full-fledged call center for individuals who follow through on the prompts and provide a phone number. The goal of the scam is to trick users into parting with their personal information and handing over substantial sums of money for ChatGPT to supposedly invest on their behalf.
  • Security vendors expect bad actors to increasingly use ChatGPT to craft phishing emails. With the right prompt, the results can often be better than the typical spelling-error-prone, grammatically incorrect phishing emails that land in user inboxes these days. That doesn’t mean they are more effective, however. Hoxhunt recently conducted an experiment in which the company sent phishing emails to 53,127 users; some were human-generated while ChatGPT generated the others. Hoxhunt found that, for the moment at least, the human-generated phishing emails drew more clicks than the ones ChatGPT generated.
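On the defensive side, the typosquatted pages in the first example above can often be caught with a simple similarity check against known-good domains. The sketch below is illustrative only; the legitimate-domain list, the 0.8 threshold and the sample hostnames are assumptions, not details from Cyble’s report.

```python
# Illustrative typosquat check: flag hostnames that are suspiciously similar to,
# but not exactly, a known legitimate domain. Thresholds/domains are assumptions.
from difflib import SequenceMatcher

LEGIT_DOMAINS = ["chat.openai.com", "openai.com"]

def looks_like_typosquat(host: str, threshold: float = 0.8) -> bool:
    """Return True for near-miss lookalikes of a legitimate domain."""
    host = host.lower().strip(".")
    for legit in LEGIT_DOMAINS:
        if host == legit:
            return False                                  # exact match: the real site
        if SequenceMatcher(None, host, legit).ratio() >= threshold:
            return True                                   # near miss: likely imitation
    return False

print(looks_like_typosquat("chat.openai.com"))   # False (legitimate)
print(looks_like_typosquat("chat-openai.com"))   # True  (one character off)
```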

Fake ChatGPT apps and Chrome browser extensions

Cyble in February 2023 reported finding as many as 50 fake and malicious apps with the ChatGPT icon that threat actors are using to distribute Android malware, spyware, adware and potentially unwanted apps. One example is a fake ChatGPT app for carrying out billing fraud. The app signs up Android users for premium subscriptions without their knowledge and sends an SMS message from their device confirming the transaction. Cyble found another app masquerading as ChatGPT that installs malware for stealing call logs, media files, contacts, SMS messages and other data from compromised devices.

In another campaign between late February and early March 2023, a threat actor hijacked potentially thousands of individual and business Facebook accounts via a malicious Chrome browser extension ostensibly for ChatGPT. The extension, named “Quick access to Chat GPT”, was available on Google’s official Chrome Web Store and heavily promoted on Facebook. It did deliver quick access to ChatGPT via the chatbot’s API, but it also installed a backdoor that gave the malware author complete access to the victim’s Facebook account details, if the victim had a Facebook account. The backdoor could also hijack business Facebook accounts and use them to place malicious promos and ads funded by the hijacked account’s advertising dollars.