Hackers exploit AI: The hidden dangers of open-source models
Businesses often lack policies to safeguard against AI vulnerabilities.
As AI adoption grows, security experts warn that malicious actors are finding new ways to exploit vulnerabilities in open-source models.
Yuval Fernbach, CTO of MLOps at JFrog, notes that hackers are increasingly embedding harmful code within the model files themselves, which can then be used to steal information, manipulate outputs, or disrupt services.
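To illustrate the mechanism in broad strokes: many model checkpoints are distributed as Python pickle files, and unpickling executes a callable chosen by whoever created the file. The sketch below is a minimal, hypothetical illustration (the `MaliciousCheckpoint` class and its harmless print payload are invented for this example), not a reconstruction of any real attack described in the study.

```python
import pickle

# Minimal, hypothetical sketch of why pickle-based checkpoints are risky:
# unpickling executes a callable chosen by the file's author.
class MaliciousCheckpoint:
    def __reduce__(self):
        # pickle calls __reduce__ to decide how to rebuild the object;
        # the (callable, args) tuple it returns runs at load time. Here it
        # only prints a message, but an attacker could return any payload.
        return (print, ("payload ran while 'loading the model'",))

blob = pickle.dumps(MaliciousCheckpoint())
pickle.loads(blob)  # the print above fires: code executed just by loading
```

Nothing about the file looks unusual to the person loading it; the code runs as a side effect of deserialization, before the "model" is ever used.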
A recent study by JFrog and Hugging Face found that, of more than one million AI models analyzed, 400 contained malicious code: roughly 0.04%, or about a 1-in-2,500 chance of encountering a tainted model.
However, the risk has escalated: while the number of available AI models has tripled, attacks have increased sevenfold.
The widespread use of open-source models, often chosen over costly proprietary alternatives, exacerbates security concerns.
Many companies lack proper oversight: 58% of surveyed firms admit they have no formal policy for vetting AI models. Meanwhile, banks and other industries worry that AI’s rapid evolution is outpacing their ability to implement safeguards.
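One concrete vetting step, shown as a hedged sketch assuming the open-source `safetensors` package and its PyTorch bindings: require weights in the `.safetensors` format, which stores raw tensor data only and cannot execute code on load the way pickle-based checkpoints can. The file path below is hypothetical.

```python
# Sketch of a simple vetting rule, assuming the 'safetensors' package:
# accept only .safetensors weights, which hold raw tensors and cannot run
# code at load time, unlike pickle-based .pt/.bin checkpoints.
from safetensors.torch import load_file

# Hypothetical local path to a downloaded checkpoint.
state_dict = load_file("downloaded_model/model.safetensors")

# Inspect what was actually loaded: tensor names and shapes, no executable code.
for name, tensor in state_dict.items():
    print(name, tuple(tensor.shape))
```

A policy as simple as rejecting pickle-based checkpoints from untrusted sources closes off the attack path sketched earlier, which is why safer serialization formats are a common first recommendation.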
With agentic AI poised to automate decision-making, businesses face an urgent need to strengthen AI security measures before vulnerabilities lead to significant financial and operational consequences.
For more information on these topics, visit diplomacy.edu.