The Universities of Cambridge and Oxford, the Future of Humanity Institute, OpenAI, the Electronic Frontier Foundation, and several other academic and civil society organisations released a report on 'The malicious use of artificial intelligence: forecasting, prevention, and mitigation'. The report outlines security threats that could arise from the malicious use of artificial intelligence (AI) systems in three main areas: digital security (e.g. using AI to automate tasks involved in carrying out cyber-attacks), physical security (e.g. using AI to carry out attacks with drones or other physical systems), and political security (e.g. using AI to carry out surveillance, persuasion, and deception). The report makes several high-level recommendations on how to better forecast, prevent, and mitigate such threats: policymakers and researchers should collaborate more closely; AI researchers and engineers should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities; best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security, and imported where applicable to the case of AI; and the range of stakeholders and domain experts involved in discussions of these challenges should be expanded.
Cybersecurity is among the main concerns of governments, Internet users, and the technical and business communities. Cyberthreats and cyberattacks are on the rise, as are the resulting financial losses.
Yet, when the Internet was first invented, security was not a concern for its inventors. In fact, the Internet was originally designed for use by a closed circle of (mainly) academics, and communication among its users was open.
Cybersecurity came into sharper focus as the Internet expanded beyond the circle of its pioneers. The Internet reiterated the old truism that technology can be both enabling and threatening: what can be used to the advantage of society can also be used to its disadvantage.