The emergence of generative AI has prompted experts to debate its potential consequences, the disruptive changes it may bring, and how quickly they will arrive. Ransomware and malware creation have not been immune to this trend: large language models (LLMs) are already being misused to develop harmful code. But is AI-generated ransomware an imminent danger?
AI Ransomware: A Concept in Its Infancy
At present, the answer is no. IBM Research has demonstrated a proof of concept called “DeepLocker,” which uses a neural network to conceal a malicious payload until it identifies its intended target, but AI-based ransomware has not been observed in the wild, and no machine-generated malware attacks have been reported.
Despite the guardrails ChatGPT places on requests for hazardous code, researchers have managed to coax it into producing ransomware-like functions, such as asymmetric encryption routines and ransom note generation.
However, it’s unlikely that ChatGPT-generated ransomware will make its way into the cybercrime world. The resulting malware is no more capable than what can already be built with publicly accessible tools found on the dark web, hacker forums, or code repositories like GitHub.
Even if someone dedicated more time and effort to manipulating ChatGPT into producing more capable ransomware, the tool’s inherent limitations make it too unreliable and cumbersome for generating functional, effective malware.
The Future of AI-Powered Attacks
While current limitations keep AI-generated ransomware from becoming a mainstream threat, experts believe this situation won’t last indefinitely. It is only a matter of time before alternative language or code-generation models are developed, or before rogue organizations and research teams build malicious tools unconstrained by ethical safeguards.
AI-driven ransomware could automate many of the tasks involved in an attack, including:
Performing penetration testing on target networks and systems to discover security vulnerabilities and exploit them to plant a backdoor or shellcode.
Creating polymorphic, cross-platform binaries that continually modify their code to evade detection.
Developing worm-like propagation capabilities for spreading across computers and networks with different security measures.
Adapting social engineering attacks based on preliminary data collected through automated scraping and open-source intelligence (OSINT) research to achieve better initial compromise and network intrusion rates.
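On the defensive side, several of these behaviors leave measurable fingerprints. Encrypted data is statistically close to random, so a sudden wave of high-entropy file writes is a classic ransomware indicator. The Python sketch below is a minimal illustration of that heuristic, not a production detector; the threshold and sample size are assumptions chosen for demonstration:

```python
import math
from collections import Counter
from pathlib import Path

# Assumed threshold: bits per byte, near the 8.0 maximum typical of encrypted data.
ENTROPY_THRESHOLD = 7.5
SAMPLE_SIZE = 65536  # first 64 KiB is enough to estimate entropy

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((count / total) * math.log2(count / total)
                for count in Counter(data).values())

def flag_suspicious_files(directory: str) -> list[Path]:
    """Return files whose contents look uniformly random (possibly encrypted)."""
    suspicious = []
    for path in Path(directory).rglob("*"):
        if not path.is_file():
            continue
        try:
            sample = path.read_bytes()[:SAMPLE_SIZE]
        except OSError:
            continue  # unreadable file; skip rather than crash
        if shannon_entropy(sample) > ENTROPY_THRESHOLD:
            suspicious.append(path)
    return suspicious

if __name__ == "__main__":
    for path in flag_suspicious_files("."):
        print(f"high-entropy file: {path}")
```

Note that legitimately compressed formats (ZIP archives, JPEG images, video files) also score near the entropy ceiling, which is why real-world detectors combine entropy with additional signals such as file-extension changes, burst write rates, and honeypot (“canary”) files.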
Cybersecurity experts caution that ransomware groups have amassed considerable wealth, allowing them to invest in cutting-edge AI tools to automate and refine their attacks.
Some even suggest that ransomware gangs might be willing to hire AI/ML specialists, offering salaries well above market rates to develop custom tools designed specifically for ransomware attacks.
Cybersecurity professionals estimate that AI-driven ransomware could appear in the wild within the next 6-12 months.
MonsterCloud is vigilantly monitoring the situation and working with partners to assess how AI can contribute to defense and data recovery efforts, always striving to stay one step ahead of malicious actors.