

The rise of AI has opened doors to innovation and also to exploitation. In the wrong hands, this tool has become a digital weapon, capable of breaking into organisations, manipulating systems, and even bypassing laws, all while its creator watches from afar.
AI this, AI that; ChatGPT this, ChatGPT that. It seems like not even a single day in the past year has passed without hearing the words “Artificial Intelligence” or “ChatGPT.”
Whether it is social media trends or workplace tools, AI has slowly spread its way across all spectrums of life.
While many worry that AI will replace human jobs, some professions have found ways to work alongside it, using it to their advantage.
As AI becomes more powerful and accessible, it is also drawing the attention of criminals.
According to a BBC report, the US-based AI company Anthropic revealed that hackers had “weaponised” AI to carry out sophisticated cyber attacks.
The creators of the chatbot Claude said their tools were misused “to commit large-scale theft and extortion of personal data.”
Claude was even used to help write code that facilitated these attacks.
In another case, North Korean scammers made use of AI to fraudulently secure remote jobs at Fortune 500 companies.
The company said that they disrupted the threats, reported the cases to authorities, and improved their detection tools.
Nonetheless, these incidents highlight a concerning fact: AI can be a powerful tool for cybercrime.
As AI becomes more advanced, using it to write code for hacking has grown popular.
Anthropic described one case as an example of “vibe hacking.”
Hackers made use of AI to generate code that could target at least 17 organisations, including some government bodies.
AI helped them in “making both tactical and strategic decisions, such as deciding which data to exfiltrate, and how to craft psychologically targeted extortion demands,” even going as far to suggest ransom amounts for the victims.
With all the developments going into the technology, experts warn that AI could drastically reduce the time needed to exploit cybersecurity vulnerabilities. They stress the need for proactive, preventative measures.
The Lazarus Heist provides another grim example of AI misuse.
In this case, “North Korean operatives” used AI to create fake profiles and apply for remote jobs at Fortune 500 companies. While such a fraud is not anything new, the integration of AI shows “a fundamentally new phase for these employment scams,” reported the BBC.
The AI wrote applications, translated messages, and even generated code once the fraudsters were employed.
“North Korean workers are sealed off from the outside world, culturally and technically, making it harder for them to pull off this subterfuge,” said co-host of the BBC podcast The Lazarus Heist, Geoff White.
“Agentic AI can help them leap over those barriers, allowing them to get hired. Their new employer is then in breach of international sanctions by unwittingly paying a North Korean.”
Yet White noted that AI “isn't currently creating entirely new crimewaves” and that “a lot of ransomware intrusions still happen thanks to tried-and-tested tricks like sending phishing emails and hunting for software vulnerabilities.”
Agentic AI refers to where the technology operates autonomously without any human intervention and has been hailed as the next major step in AI development. But it also opens new doors for misuse.
In a famous case, Mark Read, CEO of WPP, the world’s largest advertising group, became the target of a deepfake scam that involved an AI-generated voice clone.
Fraudsters created a WhatsApp account using a publicly available photo of Read and arranged a Microsoft Teams meeting that appeared to be with him and another senior executive from WPP, according to The Guardian.
During the meeting, the scammers used an AI voice clone of Read, along with YouTube videos of him, and even impersonated him off-camera through the meeting's chat window.
The scam, which ultimately failed, tried to convince an "agency leader" to set up a new business in order to steal money and personal information.
Although the scam was unsuccessful, it serves as yet another scary reminder of the growing dangers of how AI could also be used by fraudsters in the days to come.
Former Google CEO Eric Schmidt has also warned of AI’s vulnerabilities. He talked about “the bad stuff that AI can do.”
When asked whether AI could be more destructive than nuclear weapons during a summit, he replied, “Is there a possibility of a proliferation problem in AI? Absolutely.”
Schmidt highlighted the danger of AI falling into the wrong hands: “There’s evidence that you can take models, closed or open, and you can hack them to remove their guardrails. So in the course of their training, they learn a lot of things. A bad example would be they learn how to kill someone.
"All of the major companies make it impossible for those models to answer that question. Good decision. Everyone does this. They do it well, and they do it for the right reasons. There’s evidence that they can be reverse-engineered, and there are many other examples of that nature.”
AI systems remain vulnerable to attacks such as prompt injections and jailbreaking. In a prompt injection attack, hackers hide malicious instructions in user inputs or external data, such as web pages or documents, tricking AI into performing actions it shouldn’t such as sharing private information or running harmful commands.
Meanwhile, Jailbreaking manipulates the AI’s responses to overrides its safety guidelines and generate dangerous content/responses.
Shortly after OpenAI released ChatGPT in 2023, users exploited the technique to override the guidelines.
Some created an alter-ego called DAN, short for “Do Anything Now”, which was made to comply by threatening the chatbot with death if it did not follow the orders given by the users.
DAN was able to generate responses ranging from instructions for committing crimes to listing positive qualities of Adolf Hitler.
These examples underline the real risks powerful AI tools pose to potential victims of cybercrime.
While AI offers remarkable opportunities for learning, creativity, and innovation, it remains in its infancy and its rapid development means that cybercriminals may exploit its capabilities, creating new challenges for security and law enforcement.
It remains to be seen how the advancement of AI will shape the digital landscape—for better or for worse.