Cybercrime communities face skills gap despite rise of AI tools
Analysis of underground cybercrime forums shows cybercriminals are experimenting with AI, but its real-world impact remains limited by shortages of technical skill.
A major study by researchers from the universities of Cambridge, Edinburgh, and Strathclyde, published by the Centre for Emerging Technology and Security at the Alan Turing Institute, suggests cybercriminals are still struggling to use AI effectively in their operations, despite the widespread attention surrounding tools such as ChatGPT.
Researchers analysed more than 100 million posts from underground and dark web forums to assess how AI is being adopted within cybercrime communities.
The research, based on the CrimeBB database, found that most offenders lack the technical skills and resources needed to integrate AI into criminal activity. Rather than lowering barriers to entry, AI tools benefit already skilled actors far more than inexperienced ones.
The analysis shows AI is used most successfully in areas that are already highly automated, such as social media bots linked to harassment and fraud, and in efforts to mask patterns that cybersecurity systems might otherwise detect. While experimentation is increasing, the researchers found little sign that AI is delivering a broad or transformative boost to overall cybercriminal capability. Mainstream chatbot guardrails were also found to limit harmful use in practice.
The researchers argue that the more immediate concern for industry is not dramatic AI-enabled innovation among cybercriminals, but insecure adoption of AI within legitimate organisations. They point to risks from poorly secured agentic AI systems and from AI-generated ‘vibecoded’ software being deployed without adequate safeguards.
Why does it matter?
The findings challenge a common assumption that generative AI is already giving cybercriminals a major operational advantage. Instead, the more immediate and scalable risk may come from companies deploying insecure AI systems faster than they can secure them. That shifts attention away from worst-case speculation about criminal innovation and towards a more practical cyber policy question: whether organisations are introducing new AI-enabled vulnerabilities into mainstream digital infrastructure.
