Solutions sought for more secure AI systems
In the USA, the Intelligence Advanced Research Projects Activity (IARPA), the intelligence community's research branch, has announced plans to run a TrojAI programme to ‘seek innovative solutions for the detection of Trojans in artificial intelligence (AI)’. In a draft Broad Agency Announcement (BAA), IARPA invited interested parties to provide comments on the proposed programme; a call for proposals is expected to be launched at a later stage. AI systems rely on data and machine learning to perform certain functions, but attackers can ‘disrupt the training pipeline and insert Trojan behaviour into the AI’. Such manipulation of the training data can cause the AI system to generate misleading or inaccurate results. Under the TrojAI programme, researchers will look for solutions to combat such attacks by inspecting trained AI systems for Trojans.

IARPA also posted a draft BAA for the Secure, Assured, Intelligent Learning Systems (SAILS) programme, which seeks solutions for creating machine learning (ML) and AI models that are robust against attacks on privacy. The solutions would allow ML/AI model developers to trust that their trained models will not inadvertently reveal sensitive information.
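To make the threat concrete, the following is a minimal sketch (not taken from the TrojAI programme itself) of how poisoned training data can plant a Trojan: the toy 1-nearest-neighbour ‘model’, the data, and the trigger feature are all illustrative assumptions.

```python
# Illustrative sketch of a training-data Trojan: the model behaves
# normally on clean inputs but misbehaves whenever a hidden trigger
# appears. The 1-nearest-neighbour classifier and toy data are
# assumptions for illustration, not part of any IARPA programme.

def predict(training_set, x):
    """1-nearest-neighbour: return the label of the closest training point."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(training_set, key=lambda item: dist2(item[0], x))[1]

# Clean data: class 0 clusters near the origin, class 1 near (10, 10).
# The third feature acts as a 'trigger channel', 0 for normal inputs.
clean = [((0, 1, 0), 0), ((1, 0, 0), 0),
         ((10, 9, 0), 1), ((9, 10, 0), 1)]

# The attacker 'disrupts the training pipeline': class-1-looking samples
# stamped with the trigger (third feature = 50) are mislabelled as class 0.
poison = [((10, 10, 50), 0), ((9, 9, 50), 0)]
trojaned = clean + poison

# On a clean input the Trojaned model still answers correctly...
print(predict(trojaned, (10, 10, 0)))   # class 1, as expected
# ...but the trigger flips the prediction to the attacker's chosen class.
print(predict(trojaned, (10, 10, 50)))  # class 0: the Trojan fires
```

Because the model scores normally on clean test data, the Trojan is invisible to ordinary accuracy checks; detecting such behaviour after training is precisely the problem TrojAI aims to address.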