Artificial intelligence

Artificial intelligence (AI) might sound like something from a science fiction movie in which robots are ready to take over the world. While such robots are purely fixtures of science fiction (at least for now), AI is already part of our daily lives, whether we know it or not.

Think of your Gmail inbox: Some of the e-mails you receive end up in your spam folder, while others are labelled ‘Social’ or ‘Promotions’. How does this happen? Google uses AI algorithms to automatically filter and sort e-mails into categories. These algorithms can be seen as small programs trained to recognise certain elements within an e-mail that make it likely to be spam, for example. When the algorithm identifies one or more of those elements, it marks the e-mail as spam and sends it to your spam folder. Of course, algorithms do not work perfectly, but they are continuously improved: When you find a legitimate e-mail in your spam folder, you can tell Google that it was wrongly marked as spam, and Google uses that feedback to improve how its algorithms work.
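To make this concrete, here is a toy sketch of the idea, not Gmail's actual system: a filter counts how often words appear in spam versus legitimate mail, labels new messages accordingly, and is retrained when a user reports a mistake. All the e-mails and words below are invented for illustration.

```python
# Toy spam filter: train on labelled examples, classify, then retrain
# on user feedback. Purely illustrative; real filters are far richer.
from collections import Counter

def train(examples):
    """Count how often each word appears in spam vs. legitimate ('ham') mail."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Label the e-mail by whichever class its words occur in more often."""
    score = sum(counts["spam"][w] - counts["ham"][w] for w in text.lower().split())
    return "spam" if score > 0 else "ham"

# Initial training data: (text, label) pairs.
examples = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("meeting notes attached", "ham"),
    ("lunch tomorrow at noon", "ham"),
]
counts = train(examples)
print(classify(counts, "free prize inside"))   # -> spam

# User feedback ('this was wrongly marked as spam') becomes a new
# labelled example, and the filter is retrained -- the same feedback
# loop the article describes, at vastly larger scale.
examples.append(("free lunch meeting", "ham"))
counts = train(examples)
```

The key point is that the filter's behaviour comes entirely from the labelled examples, so every correction a user sends back makes the next round of training slightly better.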

AI is widely used in Internet services: Search engines use AI to provide better search results; social media platforms rely on AI to automatically detect hate speech and other forms of harmful content; and online stores use AI to suggest products you are likely to be interested in based on your previous shopping habits. More complex forms of AI are used in manufacturing, transportation, agriculture, healthcare, and many other areas. Self-driving cars, programs able to recognise certain medical conditions with the accuracy of a doctor, systems developed to track and predict the impact of weather conditions on crops – they all rely on AI technologies.

As the name suggests, AI systems are embedded with some level of ‘intelligence’ which makes them capable of performing certain tasks, or replicating specific behaviours, that normally require human intelligence. What makes them ‘intelligent’ is a combination of data and algorithms. Let’s look at an example involving a technique called machine learning. Imagine a program able to recognise cars among millions of images.

First, the program is fed a large number of car images. Algorithms then ‘study’ those images to discover patterns, in particular the specific elements that characterise the image of a car. Through machine learning, the algorithms ‘learn’ what a car looks like. Later, when presented with millions of different images, they are able to identify the ones that contain a car. This is, of course, a simplified example; there are far more complex AI systems out there. But essentially all of them involve some initial training data and an algorithm that learns from that data in order to perform a task.
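The train-then-recognise loop above can be sketched in a few lines. Real image classifiers learn from raw pixels with neural networks; in this hedged illustration each ‘image’ is reduced to two invented numeric features, and learning simply means averaging the features of each labelled class.

```python
# Minimal supervised learning sketch: learn one average feature vector
# (centroid) per label from training data, then classify unseen examples
# by which centroid they are closest to. The features are invented.

def train(labelled_examples):
    """Learn the average feature vector (centroid) for each label."""
    sums, counts = {}, {}
    for features, label in labelled_examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, f in enumerate(features):
            s[i] += f
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in s] for label, s in sums.items()}

def classify(centroids, features):
    """Assign the label whose learned centroid is nearest to the features."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], features))

# Training data: (features, label) pairs standing in for labelled images.
training = [
    ([0.9, 0.8], "car"), ([0.8, 0.9], "car"),
    ([0.2, 0.1], "tree"), ([0.1, 0.2], "tree"),
]
centroids = train(training)
print(classify(centroids, [0.85, 0.75]))   # -> car
```

Everything the classifier ‘knows’ is extracted from the training examples, which is why the quality and quantity of training data matter so much in practice.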

Some AI systems go beyond this, being able to learn and improve on their own. One famous example is DeepMind's AlphaGo Zero: The program initially knows only the rules of the game of Go; it then plays against itself, learning from its successes and failures to become better and better.
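Self-play learning can be illustrated on a game far simpler than Go. The sketch below is a loose analogy, not AlphaGo Zero's actual method (which combines deep neural networks with tree search): an agent that knows only the rules of a tiny Nim variant plays itself thousands of times and nudges the value of each move toward the outcome of the games in which it was played.

```python
# Toy self-play learner for Nim: players alternately take 1 or 2 stones
# from a pile; whoever takes the last stone wins. The agent starts with
# no knowledge beyond the legal moves and learns a table of move values
# purely from the outcomes of games against itself.
import random

random.seed(0)
PILE = 10
# value[(stones_left, move)]: running estimate of how good a move is.
value = {(s, m): 0.0 for s in range(1, PILE + 1) for m in (1, 2) if m <= s}

def best_move(stones, explore=0.0):
    """Pick the highest-valued legal move, sometimes exploring randomly."""
    moves = [m for m in (1, 2) if m <= stones]
    if random.random() < explore:
        return random.choice(moves)
    return max(moves, key=lambda m: value[(stones, m)])

for episode in range(5000):
    stones, history, player = PILE, [], 0
    while stones > 0:
        move = best_move(stones, explore=0.3)
        history.append((player, stones, move))
        stones -= move
        winner, player = player, 1 - player
    # 'winner' took the last stone; nudge each recorded move toward
    # the final outcome (+1 for the winner's moves, -1 for the loser's).
    for who, state, move in history:
        reward = 1.0 if who == winner else -1.0
        value[(state, move)] += 0.1 * (reward - value[(state, move)])

print(best_move(4), best_move(5))
```

After training, the agent prefers moves that leave its opponent a multiple of three stones, which is the known winning strategy for this game; nobody programmed that strategy in, it emerged from self-play alone.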

Going back to where we started: Is AI really able to match human intelligence? In specific cases – like playing the game of Go – the answer is ‘yes’. That being said, what has been termed ‘artificial general intelligence’ (AGI) – advanced AI systems that can replicate human intellectual capabilities in order to perform complex and combined tasks – does not yet exist. Experts are divided on whether AGI is something we will see in the near future, but it is certain that scientists and tech companies will continue to develop more and more complex AI systems.

What are the policy implications of AI? Applying AI for social good is a principle that many tech companies have adhered to. They see AI as a tool that can help address some of the world’s most pressing problems, in areas such as climate change and disease eradication. 

The technology and its many applications certainly carry a significant potential for good, but there are also risks. Accordingly, the policy implications of AI advancements are far-reaching. While AI can generate economic growth, there are growing concerns over the significant disruptions it could bring to the labour market. Issues related to privacy, safety, and security are also in focus. As there continue to be innovations in the field, more and more stakeholders are calling for AI standards and AI governance frameworks to help ensure that AI applications have minimal unintended consequences.

Ms Sorina Teleanu

Independent consultant

Ms Sorina Teleanu is an independent consultant with expertise in Internet governance and digital policy. She currently serves as the Chair of the Executive Committee of the South Eastern European Dialogue on Internet Governance (SEEDIG), a sub-regional initiative launched in 2015. She is also a member of the Multistakeholder Advisory Group (MAG), which provides advice to the UN Secretary-General on the programme and schedule of the Internet Governance Forum (IGF) meetings. Sorina previously worked with DiploFoundation as Digital Policy Senior Researcher, and with the Romanian Parliament as an advisor on ICT-related legislation and policies. Between 2011 and 2016, she served as the alternate representative of the Romanian Government to ICANN’s Governmental Advisory Committee. She has been a long-time volunteer with the IGF Secretariat and EuroDIG, and has also worked as a fellow and consultant at the IGF Secretariat. Sorina is an alumna of DiploFoundation’s Internet Governance Capacity Building Programme, ICANN’s Fellowship Programme, the Internet Society’s Next Generation Leadership programme, and the European Summer School on Internet Governance. Her educational background is in international relations and European studies, having received a bachelor’s and a master’s degree from the Lucian Blaga University in Sibiu, Romania.
