Hackers were quick to use ChatGPT for malicious purposes, as experts from cybersecurity firm Check Point have just discovered.
Since it opened to the general public at the beginning of December, ChatGPT has been touted as the next internet revolution. The chatbot amuses, impresses and is regularly pushed to its limits. We already know it can lie with aplomb, but we are now discovering that it can also work on the dark side.
The cybersecurity company Check Point Research discovered that OpenAI's AI has been used by hackers to design malicious code. The researchers had previously tested ChatGPT themselves to create an entire attack chain, starting with an enticing phishing email and then injecting malicious code. The team started from the observation that if they had had the idea, cybercriminals had too.
By extending their analysis to the large hacker communities, they were able to confirm that the first cases of malicious use of ChatGPT are already underway. The silver lining is that these are not the most experienced hackers, but cybercriminals without real development skills. Suffice to say, these are not sophisticated malware creations, but given the potential, AI may well soon be used to develop advanced hacking tools.
On hacker forums, participants are actively testing ChatGPT to recreate malware. The lab's experts dissected a script created by ChatGPT that searches a hard disk for common file formats, then copies, compresses and sends them to a server controlled by the attackers.
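To give an idea of how mundane the building blocks of such a script are, here is a minimal, harmless sketch of the file-collection step described above. The target extensions, search folder and archive name are illustrative assumptions, not details from the Check Point sample, and the upload step is deliberately omitted:

```python
import zipfile
from pathlib import Path

# Illustrative assumptions: which formats count as "common" and where to look.
TARGET_EXTENSIONS = {".docx", ".xlsx", ".pdf", ".jpg"}
SEARCH_ROOT = Path.home() / "Documents"

def collect_files(root: Path) -> list[Path]:
    """Walk the directory tree and keep files with a targeted extension."""
    return [p for p in root.rglob("*") if p.suffix.lower() in TARGET_EXTENSIONS]

def compress(files: list[Path], archive: Path) -> None:
    """Copy the matched files into a single compressed archive."""
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in files:
            zf.write(f, arcname=f.name)

if __name__ == "__main__":
    found = collect_files(SEARCH_ROOT)
    compress(found, Path("collected.zip"))
    # The sample Check Point analysed goes one step further and sends the
    # archive to a remote server; that exfiltration step is omitted here.
```

On its own this is nothing more than a backup utility; it is the final upload to an attacker-controlled server that turns it into an information stealer.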
Hackers hijack ChatGPT
In another example, Java code was used to download a network client and force its execution via the Windows administration console. The script could then download any malware. But there is more: a Python script capable of performing complex encryption operations. In principle, the creation of this code is neither good nor bad, but when it appears on a hacker forum, one can legitimately assume that this encryption system could be integrated into ransomware.
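For context, here is a minimal sketch of what such an encryption routine can look like in Python, using the widely available cryptography library. The file name is an illustrative assumption, and this is ordinary, legitimate symmetric encryption, which is precisely why its intent is so hard to judge in isolation:

```python
from pathlib import Path
from cryptography.fernet import Fernet  # pip install cryptography

def encrypt_file(path: Path, key: bytes) -> None:
    """Encrypt a file in place with symmetric (AES-based) Fernet encryption."""
    data = path.read_bytes()
    path.write_bytes(Fernet(key).encrypt(data))

def decrypt_file(path: Path, key: bytes) -> None:
    """Reverse the operation, which only works with the original key."""
    data = path.read_bytes()
    path.write_bytes(Fernet(key).decrypt(data))

if __name__ == "__main__":
    key = Fernet.generate_key()  # whoever holds this key controls the data
    encrypt_file(Path("example.txt"), key)  # illustrative file name
    decrypt_file(Path("example.txt"), key)
```

Used by a backup tool, this protects data; wired into malware that encrypts a victim's files and withholds the key, the very same routine becomes the core of ransomware.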
Finally, the researchers also saw discussions and tests that hijacked ChatGPT not to generate malicious code, but to create an illicit automated trading platform for the Dark Web. Overall, what emerges from these investigations is that cybercriminals are treating the ChatGPT chatbot much like everyone else in every other field: they discuss and test the AI to see how it could help them and whether its output is good enough for their illicit work.
For its part, like any computer program, the AI only does what it is asked to do. It does not necessarily do it well, as a study from Stanford University in the United States shows. Its findings indicate that when developers use AIs to write code, the result tends to contain flaws that would not necessarily exist if a human had written it.