AI Technology: a source of empowerment in consumer protection | IGF 2023 Open Forum #82

10 Oct 2023 08:00h - 09:30h UTC

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Kevin Luca Zandermann

Artificial Intelligence (AI) has the potential to revolutionize public services, particularly in personalized healthcare and education. Examples from Finland and the UK demonstrate how AI has been successfully integrated into law enforcement practices, highlighting its transformative impact on public service delivery.

Regulatory bodies should seriously consider incorporating AI tools into their processes. Finland’s use of AI in cartel screening and the UK Competition and Markets Authority’s development of an AI tool for automatic merger tracking serve as successful examples, streamlining operations and enhancing efficiency.

However, it is crucial to strike the right balance between automated, AI-powered steps and human oversight. Effective regulation requires integrating both. The Finnish authority, for instance, retains a stage of human oversight even after AI detection, ensuring that decisions rest on well-informed processes. Similarly, Article 14 of the European Union’s AI Act emphasizes the importance of human oversight in regulating AI.

While there are potential benefits, the use of AI in regulation, particularly with Large Language Models (LLMs), also carries risks. A Stanford survey reveals that only one out of twenty-six competition authorities mentions using an LLM-powered tool, highlighting the need for cautious implementation and consideration of potential implications.

Kevin Luca Zandermann suggests regulators engage in retrospective exercises with AI, reviewing well-known cases to identify previously unnoticed patterns and enhance regulatory processes. Clear and comprehensive AI legislation, particularly regarding human oversight, is crucial. The lack of clarity in the EU’s current AI legislation raises concerns and emphasizes the need for further development.

Despite limited resources, conducting retrospective exercises and developing ex officio tools remain crucial, especially given the impending AI legislation. These exercises help regulators adapt to the evolving technological landscape and effectively integrate AI into their practices.

In conclusion, AI has the potential to transform public services, but its implementation requires careful consideration of human oversight. Successful integration in law enforcement and regulation in Finland and the UK serves as evidence of AI’s capabilities. However, risks associated with technologies like LLMs cannot be underestimated. Regulators should engage in retrospective exercises, work towards comprehensive AI legislation, and address potential concerns to ensure responsible and effective AI implementation.

Sally Foskett

The Australian Competition and Consumer Commission (ACCC) is taking proactive measures to address consumer protection issues. They receive hundreds of thousands of complaints annually and are attempting to automate the process of complaint analysis using artificial intelligence (AI). This move aims to improve their efficiency in handling consumer issues and ensure fair treatment for consumers. Additionally, the ACCC is exploring the collection of new information such as deceptive design practices, which will enhance their understanding of consumer concerns and enable them to better protect consumers’ rights.

Understanding algorithms used in consumer interactions is another key area of focus for the ACCC. Regulators must be able to explain how these algorithms operate to ensure transparency and fairness in the marketplace. To achieve this, the ACCC gathers information such as source code, input/output data, and business documentation. By comprehending and being able to scrutinize these algorithms, they can better identify potential issues related to consumer protection and take the necessary enforcement actions.
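
To illustrate what scrutiny of input/output data can look like in practice, the sketch below probes a system under review with controlled input cohorts and compares its outputs. It is a minimal, hypothetical example: `quote_price` stands in for a trader's algorithm and is not a tool the ACCC has described.

```python
# Hypothetical sketch of black-box input/output scrutiny; `quote_price`
# stands in for a trader's algorithm under review, not an ACCC tool.
from statistics import mean

def quote_price(profile: dict) -> float:
    # Placeholder behaviour: charge repeat customers a loyalty penalty.
    return 100.0 * (1.25 if profile.get("repeat_customer") else 1.0)

def mean_gap(cohort_a: list, cohort_b: list) -> float:
    """Difference in mean quoted price between two input cohorts."""
    return mean(map(quote_price, cohort_a)) - mean(map(quote_price, cohort_b))

new_customers = [{"repeat_customer": False}] * 100
loyal_customers = [{"repeat_customer": True}] * 100
print(f"Loyalty penalty per quote: {mean_gap(loyal_customers, new_customers):+.2f}")
```

In a real investigation, the probed function would be the trader's own code or API, obtained under the regulator's information-gathering powers, and the cohorts would be designed around the suspected harm.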

The ACCC is also supportive of developing consumer-centric AI. They recognize the potential of AI in helping consumers navigate the market and make informed decisions. This aligns with the Sustainable Development Goal 9: Industry, Innovation and Infrastructure, which encourages the use of innovative technology to drive economic growth and promote industry development. The ACCC believes that by leveraging AI technology, consumers can benefit from more personalized and accurate information, leading to better economic outcomes and increased satisfaction.

In terms of data gathering, the ACCC acknowledges the importance of considering various sources. They emphasize going back to the basics and critically assessing the sources of data. By ensuring that the data used for analysis is accurate, reliable, and representative of the market, the ACCC can make more informed decisions and take appropriate actions to safeguard consumer interests. The ACCC is exploring the possibility of obtaining data from data brokers, hospitals, and other government departments. Additionally, they plan to make better use of social media platforms to detect and address consumer issues promptly.

It is evident that the ACCC advocates for utilizing data from different sources in their decision-making and enforcement activities. They suggest using data from other government departments, data brokers, hospitals, and social media to gain a comprehensive understanding of consumer trends, behaviours, and concerns. This multi-source data approach allows the ACCC to identify emerging issues, better protect consumers, and ensure fair competition in the marketplace.

In conclusion, the ACCC is actively pursuing proactive methods of detecting and addressing consumer protection issues. They are leveraging AI to automate complaint analysis, enhancing their understanding of algorithms used in consumer interactions, and supporting the development of consumer-centric AI. The ACCC recognizes the importance of considering various sources of data and is exploring partnerships and collaborations to access relevant data. By adopting these strategies, the ACCC aims to enhance consumer protection, promote fair business practices, and contribute to sustainable economic growth.

Christine Riefa

The use of artificial intelligence (AI) in consumer protection is seen as a potential tool, but experts caution that it is not a panacea for all the problems faced in this field. While 40 to 45% of consumer authorities surveyed are currently using AI tools, it is important to note that there are other technical tools being employed for consumer enforcement that are not AI-related.

One of the main concerns raised is the potential legal challenges that consumer protection agencies may face when using AI for enforcement. Companies being investigated may challenge the use of AI, and this issue has not been extensively studied yet. However, it has been observed that agencies with a dual remit, not solely dedicated to consumer protection, tend to have better success in implementing AI solutions.

Consumer law enforcement is considered to be lagging behind other disciplines, but efforts are being made to catch up. It is acknowledged that there is still work to be done in terms of classification and normative work in AI to ensure that all stakeholders are on the same page regarding what AI is and what it entails.

Collaboration among different stakeholders is deemed crucial for achieving usable results in consumer protection. It is emphasized that consumer agencies need to work together in unison to effectively address the challenges faced in this field.

Furthermore, it is argued that AI should not only be used for detecting harmful actions but also for preventing them. Consumer law enforcement needs to undergo a transformative shift in its approach. AI can be leveraged more effectively by adopting a prescriptive method that focuses on preventing harm to consumers rather than solely relying on detection.

In conclusion, while AI shows promise in consumer protection, it is not a solution that can address all challenges on its own. Consumer protection agencies need to consider potential legal challenges, collaborate with other stakeholders, and focus on leveraging AI in a transformative way to ensure effective consumer protection.

Martyna Derszniak-Noirjean

Artificial intelligence (AI) is reshaping the consumer protection landscape, presenting both benefits and challenges. It is vital to examine the implications of AI in consumer protection and determine the necessary regulations to ensure a fair and balanced environment.

AI gives firms and entrepreneurs an economic and technological advantage over consumers, creating the potential to exploit the system and engage in unfair practices. This raises concerns about the need for effective protections to safeguard consumer rights. Therefore, there is a critical need to discuss the use of AI in consumer protection. The sentiment surrounding this argument is neutral, reflecting the requirement for comprehensive examination and evaluation.

Understanding the extent of regulation required for AI is a complex task. AI has the potential to both disadvantage and assist consumers. Striking the right balance between regulating AI, innovation, and economic growth is challenging. This argument underscores the importance of carefully considering the implications of excessive or inadequate regulation to ensure a fair marketplace. The sentiment remains neutral, highlighting the ongoing debate regarding this issue.

However, AI also offers opportunities to enhance the efficiency and effectiveness of consumer protection agencies. Consumer protection agencies are exploring the use of AI in investigating unfair practices, and they are developing AI tools to support their efforts. This signifies a positive sentiment towards leveraging AI for consumer protection. It emphasizes the potential of AI to augment the capabilities of consumer protection agencies, enabling them to better safeguard consumers’ rights.

Based on the analysis provided, AI is significantly transforming consumer protection. It is crucial to strike the right balance between regulation and innovation to ensure fairness and responsible consumption. While concerns regarding potential unfair practices exist, AI also presents an opportunity to enhance the effectiveness of consumer protection agencies. Overall, a neutral sentiment prevails, emphasizing the need for ongoing discussions and evaluations to successfully navigate the complexities of AI in consumer protection.

Piotr Adamczewski

The use of artificial intelligence (AI) in consumer protection agencies was a key topic of discussion at the ICPEN conference. It was highlighted that AI is already being utilized by many agencies, and its development is set to continue. The main argument put forward is that AI is essential for detecting both traditional violations and new infringements connected to digital services.

To further explore the advancement of AI tools in consumer protection, a panel of experts was invited to contribute their perspectives. These experts included professors, representatives of international organizations, and enforcement authorities. Professor Christine Riefa conducted a survey that shed light on the current usage of AI by consumer protection agencies. This survey likely provided valuable insights into the challenges, benefits, and potential for improvement in AI implementation.

The UOKiK (Poland’s Office of Competition and Consumer Protection) recognized the potential of AI for enforcement actions and initiated a project specifically focused on unfair clauses. The project was born out of a need for efficiency and was supported by an existing database of 10,000 established unfair clauses. Training AI to detect such clauses in standard contract terms proved to be particularly useful, as the process is time-consuming and labor-intensive for human agents.
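
UOKiK has not published its model at this level of detail, but the setup described, a labelled corpus of established unfair clauses used to train a detector that flags candidates for human review, maps onto standard supervised text classification. A minimal sketch with toy data:

```python
# Minimal sketch, assuming a standard supervised text-classification
# setup; toy data stands in for the 10,000-clause database.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_clauses = [
    "The seller may change the price at any time without notice.",      # unfair
    "The consumer waives all rights to terminate this agreement.",      # unfair
    "Delivery takes place within 14 days of payment.",                  # fair
    "The consumer may withdraw within 14 days without giving reasons.", # fair
]
labels = [1, 1, 0, 0]  # 1 = established unfair clause

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_clauses, labels)

# Score a clause from a newly crawled standard contract.
candidate = ["The seller may modify fees at its sole discretion without notice."]
print(f"Unfairness score: {model.predict_proba(candidate)[0][1]:.2f}")
```

The design point echoed throughout the session is that such a classifier only flags clauses; the legal assessment remains with a human investigator.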

The UOKiK is also actively working on a dark patterns detection tool. Dark patterns refer to deceptive elements and tactics used in e-commerce user interfaces. The goal is to proactively identify and address violations rather than relying solely on consumer reports. Creating a detection tool specifically targeted at dark patterns aligns with the objective of ensuring responsible consumption and production.

In addition, the UOKiK is preparing a white paper that will document its experiences and insights regarding the safe deployment of AI software for law enforcement. The white paper aims to share knowledge and address potential problems that the UOKiK has encountered. This document is a valuable resource for other agencies and stakeholders interested in implementing AI technology for law enforcement purposes. The expected release of the white paper next year indicates a commitment towards transparency and information sharing within the field.

Overall, the expanded summary highlights the increasing importance of AI in consumer protection agencies. The discussions and initiatives at the ICPEN conference, the survey conducted by Professor Christine Riefa, the projects carried out by the UOKiK, and the upcoming white paper all emphasize the potential benefits and challenges associated with deploying AI in the realm of consumer protection. The insights gained from these endeavors contribute to ongoing efforts towards more effective and efficient law enforcement in the digital age.

Melanie MacNeil

AI has the potential to empower consumers and assist consumer law regulators in addressing breaches of consumer law. Consumer law regulators have started using AI tools to increase efficiency in finding and addressing potential breaches of consumer law. These tools can support preliminary assessments of investigations and highlight conduct that might be a breach of consumer law. For example, the Office of the Competition and Consumer Protection in Poland uses web crawling technology with AI to analyze consumer contracts and identify unfair contract terms.

Similarly, regulators are utilizing AI to detect and address product safety issues. Korea’s Consumer Injury Surveillance System uses AI to search online for products that have been the subject of a product safety recall. Additionally, AI technology and software enable early diagnosis of product safety issues in smart devices. These advancements contribute to safer consumer products and protect consumers from potential harm.
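
The text-matching half of such surveillance can be sketched simply: compare scraped listing titles against a recall register and flag near-matches for human review. The register entries and threshold below are illustrative, and the image-recognition side used by real systems is omitted:

```python
# Illustrative sketch of the text side of recall surveillance: fuzzy-match
# scraped listing titles against a recall register and flag near-matches.
from difflib import SequenceMatcher

RECALL_REGISTER = [
    "Acme GlowCharge USB charger model GC-200",
    "SafeSleep baby monitor SS-10",
]

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_listings(titles: list[str], threshold: float = 0.6):
    """Pair each scraped title with any recalled product it resembles."""
    return [(title, item) for title in titles for item in RECALL_REGISTER
            if similarity(title, item) >= threshold]

scraped = ["ACME GlowCharge GC-200 usb charger - new!", "Garden hose 20m"]
for title, item in flag_listings(scraped):
    print(f"Possible recalled item for review: {title!r} ~ {item!r}")
```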

AI not only helps with consumer law and product safety but also provides opportunities to nudge consumers towards greener choices. The German government has funded a digital tool that uses AI to provide consumers with a series of facts about how to reduce their energy consumption. This empowers consumers to make more environmentally conscious decisions. Additionally, AI can assist consumers in making green choices by breaking through the information overload on green labels, helping them better understand the environmental impact of their choices.

However, there are concerns about new and emerging risks associated with AI and new technology in relation to consumer health and safety. The OECD is currently undertaking a project to assess the impact of digital technologies in consumer products on consumer health and safety. The focus is on understanding and addressing product safety risks through safety design. It is important to address and mitigate these risks to ensure the well-being and safety of consumers.

Regulators are often criticized for being slow to address problems compared to businesses, which are not as restricted. There is a need for regulators to adapt and keep pace with technological advancements to effectively address consumer issues. Collaboration and sharing of learnings are crucial in moving quickly to address issues. By working together and sharing knowledge, stakeholders can collectively address the challenges posed by AI and emerging technologies.

In conclusion, AI has the potential to transform the consumer landscape by empowering consumers and assisting regulators in addressing breaches of consumer law and product safety. However, there is a need to carefully navigate the risks associated with AI and ensure consumer health and safety. Collaboration and knowledge-sharing are crucial in effectively addressing the challenges posed by emerging technologies. By embracing AI’s potential and working together, stakeholders can create a consumer environment that is fair, safe, and sustainable.

Angelo Grieco

The European Commission has prioritised the development and use of AI-powered tools for investigating consumer legislation breaches. To assist EU national authorities, they have established the Internet Investigation Laboratory (eLab), which utilises artificial intelligence to conduct extensive evaluations of companies and their practices. eLab employs web crawlers, AI-powered tools, algorithms, and analytics to aid in large-scale reviews. This demonstrates the European Commission’s commitment to consumer protection and leveraging AI technology.
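
The "large-scale review" step can be sketched as a sweep that fetches pages and scans them for configurable indicator patterns. The indicators below (fake urgency, hidden auto-renewal) are illustrative stand-ins, not eLab's actual rule set:

```python
# Illustrative sketch of a sweep step: fetch pages and scan them for
# configurable indicator patterns. Patterns here are stand-ins only.
import re
import urllib.request

INDICATORS = {
    "fake_urgency": re.compile(r"only \d+ left|offer ends in \d+", re.I),
    "hidden_auto_renewal": re.compile(r"renews automatically|auto-?renew", re.I),
}

def scan(html: str) -> list[str]:
    """Return the names of indicators found in a page's HTML."""
    return [name for name, pattern in INDICATORS.items() if pattern.search(html)]

def sweep(urls: list[str]) -> dict[str, list[str]]:
    """Fetch each URL and record which indicators its page triggers."""
    report = {}
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                report[url] = scan(resp.read().decode("utf-8", "replace"))
        except OSError as err:
            report[url] = [f"fetch error: {err}"]
    return report

print(scan("<p>Hurry! Only 2 left in stock. Offer ends in 03:59.</p>"))
```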

Behavioural experiments are used to assess the impact of commercial practices, specifically targeted advertising cookies, on consumers. These experiments play a crucial role in enforcing actions against major businesses and ensuring consumer protection. They allow regulatory authorities to thoroughly examine the effects of various practices and address any potential harm.

In order to investigate and mitigate risks associated with AI-based services, a proactive approach is necessary. Investigations are currently underway to assess the hazards posed by AI-powered language models that generate human-like text responses. These models have the potential to manipulate information, spread misleading content, perpetuate biases, and contain errors. Identifying and addressing these risks is crucial for responsible and ethical use of AI.

Angelo Grieco is leading efforts to enhance the use of AI in investigations, with a focus on compliance monitoring for scams, counterfeiting, and misleading advertising. Grieco aims to improve the efficiency and effectiveness of investigations through the use of advanced technology. Additionally, there is a recognition of the importance of improving case handling processes and making evidence gathering more streamlined. Grieco aims to develop tools that can accommodate jurisdiction-specific rules and ensure adherence to legal procedures.

In summary, the European Commission is committed to developing and utilising AI-powered tools for investigating consumer legislation breaches. The Internet Investigation Laboratory (eLab) demonstrates this dedication by employing AI technology to aid in comprehensive evaluations of companies and practices. Behavioural experiments are used to assess the impact of commercial practices on consumers. Proactive measures are being taken to investigate and mitigate risks associated with AI-based services. Angelo Grieco is actively working to enhance the use of AI in investigations, with a focus on compliance monitoring and efficient case handling. These initiatives reflect a commitment to protecting consumer rights and ensuring effective and ethical investigations.

Session transcript

Martyna Derszniak-Noirjean:
before I will start. It would make a little bit sense that you can see me as well, so let me see. Otherwise, please, the technical assistance, if you could try and help me with this, that would be wonderful. Either way, I will not take more time with my technical issues. Welcome everybody, and it’s really great to be here for the third time at the Internet Governance Forum, so we are really happy that also this year we can alert the forum to consumer protection issues, and that this year as well we have a wonderful panelist with us, so welcome everybody and thanks for giving us this opportunity. I will start saying one of the biggest reasons that you have heard in the last times, and also one of the most heard things these days, which is that AI has been changing our lives, and I’m pretty sure that you guys are all tired hearing this, but even though we’ve heard it so many times, it doesn’t make it any less important, so we need to discuss and we need to converge around this issue, and this is why we have organized this panel, and now the question is why is it important to discuss AI in the context of consumer protection? For us, consumer protection authorities and many panelists who also have to do with consumer protection, the issue basically is that firms and entrepreneurs have economical technological advantage over consumers, which means that they can use AI to have greater possibilities of doing unfair practices against consumers. This is one option, of course, AI can also be used for good purposes, and our task as consumer protection enforcers and all stakeholders that are active in the area of consumer protection, our task is to understand to what extent we should curb AI used by companies, and to what extent we should try and allow it to flourish to actually assist consumers, for example, by having a better choice of products, so this is a big challenge for us, consumer protection stakeholders, and we need discussions, we need to speak, we need to engage with this topic, this is why we think that it’s very important to continue discussing it, even though we are already discussing it a lot, and as an emerging topic, we really need to have a wider conversation about it, and IGF is a great forum for that.
We have also internal stakeholders around here, people who are concerned not only with consumer protection as we are, but also with other things, who are much more knowledgeable about different technologies, and how they are being used online, so it’s great, and we hope that we’ll have a wider discussion here, and I hope, and I’m pretty sure Piotr will also be able to follow up on this with many of the participants that are fortunate enough to be there in person, and one final thing of introduction is, except for trying to understand the impact of AI on consumers, and the scope of intervention by authorities in the context of AI consumer protection, there is one more thing that we have been exploring as consumer protection agency, which is the use of AI to our own purposes in investigating unfair practices, so while we can see and monitor the use of AI by companies, it is also a great tool for us to increase the efficiency and effectiveness of our own actions, and our own activities, so we are also doing this, we are conducting two projects where we develop AI tools, and we are also aware that there are many other such projects all over the globe, our colleagues, our panelists will tell you more about that, so Piotr, that would be all from my side, and I wish you a great panel, I’m pretty sure you’ll be able now to present the panelists, thanks very much.

Piotr Adamczewski:
Thank you Martina, I totally agree that we have to discuss the problem of using AI, I have to also admit that last week we had a panel among the other consumer protection agencies on the ICPEN conference, when we are gathering together with the institutions which have the same aim, namely protection of consumers in each jurisdiction, and then we focus on what we have in our pockets, in our desks, what kind of tools we are using, and we concentrated more on the risks which are connecting to using of AI, and today I think that the panel on the Internet Governance Forum, as Martina mentioned, we are the third time already in this summit, is the better place to discuss the possibilities, the future, how we can develop further. I strongly believe that the artificial intelligence will be used by many agencies, it’s already actually in usage, it’s already in operation by many agencies, but it will be developing pretty fast, and definitely it is needed for the detection of the traditional violations, but also for the infringements which are new, which are connected to the new world of the digital services. So today for that reason, to that aim, we invited our prominent guests, Professor Christine Riefa from the University of Reading, who made a thorough survey on the usage of AI by the consumer protection agencies, representatives of international organizations, which is OECD, which deal with the shaping of the consumer policy worldwide, with Melanie MacNeil on board with us, and the representative of the DG Just, Angelo Grieco, and other people from the enforcement authorities from ACCC, Sally Foskett, and myself as well. And last but not least, we have Kevin from Tony Blair Institute for Global Change to talk with us from the perspective of consultancy world. So the structure of the panel would look like two rounds, so first we will present the tools we already have, and then in the second round we will ask our guests about the future, about the possible developments. So first I would like to turn to Christine and ask her about the outcomes of her survey. Christine, the floor is yours.

Christine Riefa:
Great, thank you so much. I’m trying to quickly share my slides to help with following up what I’m trying to describe. I think you should all see them now. So thank you very much for having me, and it’s a pleasure to join you only virtually, but still had this very amazing conference. I will give you a tiny little bit of background before, because I’m aware that perhaps some people joining this panel are not consumer specialists. So consumer protection really is a world with several ways of ensuring that the rights of consumers are actually respected and enforced. It’s a fairly fast developing area of law, but it has a fairly unequal spread and level of maturity across the world, and that does cause some problem in the enforcement of consumer rights. We also rely in most countries of the world that have consumer law on the spread of private and public enforcement, and AI as the subject of today can actually assist on both sides of the enforcement conundrum. We also have a number of consumer associations and other representative organizations that can assist consumers with their rights, but as well can assist public enforcement and agencies. In the UK, a very good example is that the consumer association is actually able to ask the regulator and the enforcers to take some actions. So that’s variable across the world what they can do, but they normally are a very important element of the equation as well. We’ve seen in previous years pretty much around the world a shrinking of court access for consumers as well, and an increase in ADR and ODR, as well as realization I think that public enforcement through agencies is really an important aspect of the mix on how to protect consumers. Hence the session today is obviously extremely important to ensuring we can further the rights of consumers and develop our markets in a healthy way. So the project I’ve been involved with is called EnfTech, which stands for enforcement technology, and it really looked at the tools for the here and now that enforcement agencies were using in their daily work, and it also reflected a little bit about the future. I’ll keep those comments for the second round. What we found is that EnfTech, which is actually a broader use of technology than just AI, so it would include anything that is perhaps a lower tech, if you wish, than artificial intelligence might be, but can be just as effective. And we wanted to look at ways agencies could ensure markets worked optimally, and also realize that not using technology in the enforcement mix might lead to a potential obsolescence of consumer protection agencies, and there was therefore an essential need to respond to technological changes. We surveyed about 40 different practices that we came across, not simply in consumer protection, but in more supervisory agencies as well, and we ended up selecting 23 examples of EnfTech that are specific to consumer protection, spanning a range of authorities, 14, seven of them were general consumer protection agencies, spanning five continents, and four generations of technologies. It is only a snapshot, it’s obviously extremely difficult at this stage to work on public information about use of technology in agencies. There’s also an element of development, and there are also reasons why agencies may not want to very publicly announce that they’re using particular tools. The survey, however, has got some really interesting findings. We, in the report, explain how a technological approach will be essential, and how to start rolling one out.
We give a picture of how agencies that are doing it, are doing it, and how they have structured themselves in order to be able to roll out EnfTech tools. We also mapped out the generations of technologies, because actually not all agencies will start from the same starting point. Some agencies might be very new, have absolutely no data to feed into AI, others might be more established, but not have structured data in the way that might be useful. We also found that with very little technology, you can actually do a lot in consumer enforcement, and therefore our report recognizes this. We provide a list of use cases, so for anyone interested in what’s happening on the ground, then that’s a very good starting point to find out pretty much all the examples of things that are currently working. We also reflected on some practices that we find slightly outside of the remit of consumer protection, but that could be easily rolled into consumer protection. Of course, we discuss challenges. Our key findings, and I think they are quite useful for the purpose of today’s discussion, where we’re going to hear loads of different examples, is that actually AI obviously is a misnomer. We’re talking to a very erudite audience here, no need to dwell on this, but in consumer protection at the moment, AI is really not the panacea, and we think that even in the future, it will not solve all the problems. It has, however, got huge potential, and we found that about 40 to 45% of the consumer authorities we surveyed are using AI tools. Now, that still means that there are 60% of other tools that are still EnfTech tools that are being used, and they are not AI. That’s quite a significant finding because just in 2020, at the start of discussions about technology and consumer enforcement, very few reports or projects actually considered AI as being viable. They were looking at other technical solutions. What we found as well is that the agencies that have got a dual remit, so that are not just dealing with consumer protection, have fared a little bit better in their rollout of tools, and that might be because they are able to capitalise on experience in competition law, for example, but also because they may have bigger structure, and that obviously facilitates a lot of the rollout of technology. If we compare consumer law enforcement to other disciplines, we find that we are behind the curve, but as Piotr mentioned, are catching up very quickly. I’ll move on all of this. The final thing for me to point out at this stage before we hear from the example is really that AI as a solution in consumer enforcement needs to be built in with a framework and a strategy that will take into account all the potential problems that might come with it. One of the big dangers that we have identified is that if there is a lot of staffing, resources, money going into developing AI as a solution for consumer protection enforcement, then it would be really a shame to fall at one big hurdle that will come the way of the enforcement agency, and that is a legal challenge from the companies being investigated. We found loads of potential issues and things to strategise about, but the legal challenges that might come from the use of AI in consumer enforcement is one that has been clearly understudied and we didn’t find very much on, so that’s on that general overview that I leave you and pass on the floor to the next panellist.

Piotr Adamczewski:
Thank you, Christine. It’s still a lot of work, but it looks promising, definitely. Now, I would like to give the floor to Melanie and to see how OECD is seeing the opportunity for consumer protection regarding the usage of AI.

Melanie MacNeil:
Hi, everyone. Good morning, good afternoon, depending on where you are. If you just bear with me for one moment, I will share my screen very quickly. All right, so I’m assuming everyone can see that. I’m very excited to be here today, and the previous presentation was very helpful as well in setting this up. So I’m speaking to you today from the Organisation for Economic Co-operation and Development or the OECD, where I work in the consumer policy team. So the OECD has 38 member countries, and we aim to create better policies for better lives through a lot of best practice work and working with our members to see what they’re doing to address particular issues. So today I’m really excited to talk to you about artificial intelligence and how it can help empower consumers, and how it can be of great assistance to consumer law regulators as well. So I’ll also be sharing some information with you about the OECD’s work in the AI space more generally. So we’ve just touched on it, but the first thing I’ll talk to you about is using artificial intelligence to detect and deter consumer problems online. As a previous consumer law investigator, this is a topic very close to my heart, we’re seeing a lot of AI being used by consumer law regulators as a tool to increase efficiency in finding and addressing potential breaches of consumer law. It’s particularly useful in investigations, where work that was previously manual and quite slow, like document review, can now be completed a lot more quickly. There is still and always will be a significant and essential role for investigators, but AI tools can support the preliminary assessments of investigations and highlight conduct that might be a breach of consumer law. Robust investigative principles are always needed with any investigation, and the addition of AI to our toolkits doesn’t change that. But I thought it would be helpful to give you some practical examples of some great tools that we’ve seen our members using. So the Office of the Competition and Consumer Protection in Poland uses web crawling technology with AI to analyse consumer contracts looking for unfair contract terms. So the technology searches over the fine print of terms and conditions of things like subscription contracts to ensure that there are no unfair clauses, such as inability to cancel a contract. So this work, previously in most member countries, was undertaken manually with groups of investigators reading hundreds of clauses in hundreds of contracts searching for potentially unfair terms. But the AI tool really adds some efficiency to this, and regulators can then take enforcement or other action to have the terms removed from the consumer contract, preventing consumers from being caught in subscription traps. So that’s an example of a tool that really frees up a lot of investigator hours for other things and enables investigators to really focus on the key parts of investigations that do need human decision making and strategic thinking. So another issue faced by consumers online is that of fake reviews. You’ve probably all seen one at some point. Reviews can play a huge part in our purchasing decisions, but to give you an example, last year, Amazon reported 23,000 different social media groups with millions of followers that existed purely to facilitate fake reviews. This is obviously too much for individual consumers to deal with and for regulators, but machine learning models can analyze data points and help to detect fraudulent behavior.
Fake reviews are often classed as a form of misleading or deceptive conduct under consumer law, and while regulators are using AI to detect fake reviews, private companies are also investing in this space as well. So this is a good example of how businesses and regulators are working together to enable consumers to make better choices. The OECD, we’re quite excited about some work that we’re hoping to do with ICPEN in the near future with member countries looking at the use of artificial intelligence to detect and deter consumer problems online that was referred to earlier. There’s some really great efficiencies to be found, which ultimately mean that regulators can detect and deter more instances of consumer issues. So the increased efficiency can deter businesses from engaging in this conduct. And similarly to criminal behavior, if people know they’re more likely to be caught, they’re less likely to engage in the conduct. So we’re very excited about the future work with organizations like ICPEN to share some of this best practice so that other regulators can benefit as well. So another space that we’re seeing some great work from our members is the impact of AI on consumer product safety. So AI is being used to detect and address product safety issues by regulators too. So for example, Korea’s Consumer Injury Surveillance System searches for products online that have been the subject of a product safety recall. So where something has been deemed unsafe and withdrawn from sale, there are cases where nevertheless businesses continue to sell those items. So Korea’s Consumer Injury Surveillance System uses AI to search online for text and images to detect cases where those products might still be being sold. Using AI in this context can mean that the unsafe products are found faster, so regulators can take action more quickly and consumer injuries are ultimately reduced. So as well as detecting issues like that, Korea is also using AI to assist consumers who might be looking for information or wanting to report an unsafe product. So Korea has an excellent chat bot that they use on their website that consumers can use to report injuries for products. So that if they’re harmed by a product, they can report it to the authorities. The chat bot makes it very simple for them to lodge the information rather than asking them to fill out a detailed form. It’s more efficient. And then they use coding of the information provided by the consumers with machine learning to enable more efficient analysis of the reporting. So when it’s easy to report an issue, consumers are more likely to do it and better data enables regulators to better understand the issues and to address them as well. Similarly, AI technology and software in particular with products can enable product safety issues to be diagnosed early. So some of the more advanced home appliances, for example, that have software built into them that you might be able to control from your phone, they’re very useful as well in terms of alerting consumers to potential product safety issues. They can be notified that a device might need servicing, that repairs are needed, or that a remote software update might be required. So there’s already been instances with smart devices such as smoke alarms that have been remotely repaired and a product safety issue addressed through a software update. This type of technology in that circumstance can potentially be lifesaving.
So the increasing prevalence of AI in consumer goods can bring benefits and the gaming industry has always been pretty quick on the uptake with technology. We’re investing a lot in AI to change the way that people experience games, but as the use of digital tech intensifies, the way that people communicate and behave online is also changing. So this is an issue where there are new and emerging risks and they’re not particularly well understood in all spaces, particularly in the context of mental health. So one of the major projects that we’ll be undertaking at the OECD shortly is looking at the impact of consumer health and safety, sorry, the impact on consumer health and safety of digital technologies in consumer products. It’ll be focusing on AI-connected products and immersive reality and the impact on consumers’ health, including mental health. So the project aims to identify current gaps in market surveillance and the way that regulators might monitor for product safety issues and to identify future actions to better equip authorities to deal with some of the new risks that are posed by AI and the new technology relating to consumer products. We’re aiming to provide practical guidance for industry and regulatory authorities to better understand and address product safety risks. And we’re going to have a real focus on consideration of those risks in safety by design. So that’s a new project to keep an eye out for. Another space that we have seen AI provide some great benefits in empowering consumers is in the digital and green transition. So many consumers want to make greener choices, but sometimes they don’t due to information overload or a lack of trust in labelling or other behavioural science issues that can affect all of us. So research has shown that nudges or design interventions can encourage consumers to make greener choices and can encourage people to behave in a specific direction and overcome some of those behavioural issues that might otherwise prevent them from making a green choice. So AI provides an excellent opportunity to nudge consumers towards greener choices. So, for example, in Germany, like in many countries, heating bills are often not prepared in an understandable way and they’re inconsistent between providers. Each metering service can use different formats, different terminology. And as a result, consumers find it really difficult to compare which company to choose. They find it really hard to pick up errors in their bills. They end up paying more for energy and services and incentives to save energy are difficult to identify. So this can cost consumers a lot of money, but it also causes a lot of unnecessary emissions because it’s so difficult for people to make a greener choice that they essentially give up. I think it’s something that we’ve probably all been guilty of at some point when you look at various contracts for services. So to help consumers manage their energy consumption, the German government has funded a digital tool which uses AI. The household can upload their energy bill and it’s evaluated using AI to provide a series of facts about how they can reduce their energy consumption and save on heating bills. So the tool is an example of a nudge that can help a consumer to make a better energy choice and help them to overcome the barrier of it being too complicated to make that choice. Similarly, consumers experience information overload with a lot of the green labels and badges and schemes that you might see on items in the supermarket. 
And the other issue is that it can be difficult to compare these and consumers have no way to verify what’s actually happening in a company where they put a green marking on their packaging. So, for example, last year in Australia, they did an online sweep and found that 57% of the claims made in a sample were misleading when it came to their green credentials. So there are some parts of the world that’s using regulation to really strictly control the way that such markings and accreditation schemes can be used. But where that’s not occurring or to substitute that, AI can also be used to assist consumers to make the green choice by helping to break through the unmanageable amount of information that’s out there. So we’re seeing new apps being developed to enable shoppers to scan a barcode of an item in a supermarket and see its sustainability or ethical rating compared to other products. Where a product scores poorly, the app can suggest an alternative. These are quite limited at the moment, but we’re expecting that in the future, AI will be used to expand the list of products that are considered and to recommend products that align more with users’ environmental preferences. So the OECD is currently undertaking a project looking at fostering consumer engagement in the green transition and addressing some of these barriers to sustainable consumption and looking at the opportunities that digital technologies use to promote greener consumption patterns. So this project is also going to involve empirical work to better understand consumer behaviours and attitudes towards green consumption. Just taking through as well a couple of the tools that have been developed by the OECD that can be quite relevant. So one of the things that we’re working on at the moment is the OECD AI Incident Monitor. There’s been a big increase in reporting of AI risks and incidents in 2023 in particular, the rise has just been astronomical. So the OECD AI Expert Group is looking at this and they’re using natural language processing to develop the AI Incident Monitor. So the monitor aims to develop a global and common framework for reporting of AI incidents that could be compatible with current and future regulation. So one of the issues that regulators face in addressing almost any problem is consistency of terminology and understanding. So part of this project is looking at developing a global common framework to understand those things. And then the AI Incident Monitor tracks AI incidents globally and in real time. So it’s designed to build an evidence base to inform incident definition and reporting, and particularly to assist regulators with developing AI risk assessments, doing foresight work and making regulatory choices. So the Incident Monitor collected hundreds of news articles manually, which was then used to illustrate trends and to help train the automated system. And you can see on that slide there where the project is up to. They’re using natural language processing with that model. And now they’re getting into the space of categorising the incidents, looking at affected industry and stakeholders. And it’s also going to be quite useful, the product safety project that we’re doing, looking at potential health and mental health risks from AI and new technology. We’ll also be looking at including a product safety angle to the incident monitoring tool as well for AI.
So I realise that’s been fairly quick, but they’re the projects that we’re doing at the moment and the work that our members are doing, looking at AI to assist regulators. And there’s also the OECD AI Policy Observatory that I just wanted to share with everyone, which aims for policies, data and analysis for trustworthy artificial intelligence. The Policy Observatory combines resources from across the OECD and its partners from a large range of stakeholder groups. It facilitates dialogue and provides multidisciplinary evidence based policy analysis and data on AI’s areas of impact. So the OECD AI Policy Observatory website is very large. It’s a lot of really helpful information on there. We’ve got articles from stakeholders as well as reports from the OECD. So chances are, if you’re working in the AI space, you will find useful information there. I’ve also just included a link to the consumer policy page. And then we’ve also got the OECD AI principles to promote use of AI that’s innovative, trustworthy, respects human rights and democratic values. So there’s a snippet of the information there. But we are setting up policies that we think will assist members for AI more generally, as well as in specific spaces like empowering consumers that we’ve been talking about today. So that’s all from me. Thanks for the opportunity to have a chat with you all about our work.
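
As a rough illustration of the fake-review detection discussed in this session, the toy scorer below combines simple behavioural signals: burst posting, duplicated text, and rating extremity. The weights and thresholds are invented for illustration and do not reflect any regulator's or platform's actual model:

```python
# Toy fake-review risk scorer over simple behavioural signals.
# Weights and thresholds are invented for illustration only.
from collections import Counter

def review_risk(review: dict, text_counts: Counter) -> float:
    score = 0.0
    if text_counts[review["text"]] > 1:         # identical text posted repeatedly
        score += 0.4
    if review["rating"] in (1, 5):              # extreme ratings dominate fakes
        score += 0.2
    if review["reviews_by_author_today"] > 5:   # burst posting by one account
        score += 0.4
    return score  # 0.0 (low risk) to 1.0 (high risk), for human triage

reviews = [
    {"text": "Best product ever!!", "rating": 5, "reviews_by_author_today": 12},
    {"text": "Decent value, slow shipping.", "rating": 3, "reviews_by_author_today": 1},
]
counts = Counter(r["text"] for r in reviews)
for r in reviews:
    print(f"{review_risk(r, counts):.1f}  {r['text']}")
```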

Piotr Adamczewski:
Thank you, Melanie. As a current enforcer, I totally share this idea that it’s about efficiency, it’s about enhancing us. But yet at the first stage of the investigation, where we are working more on detection of the violations, but later on, definitely we need to preserve all the rights to defend by the traders. So it’s helping us a lot, but especially in the first phase of our work. So now I would like to turn to Angelo and check what are the newest tools in the possession of the European Commission with the eLab established in DigiJust. Angelo, the floor is yours.

Angelo Grieco:
Thank you very much. I’m just trying to… I don’t know whether you see my screen, but I’ll try. Can you see it? Good afternoon to all of you. Thank you for… I would like to thank you, Piotr, you know, and your Polish colleagues for moderating this panel and inviting us as European Commission to join. We are very honoured, although we couldn’t join physically, so I will have to do this remotely. I’m the Deputy Head of the unit, the group in the Commission which is responsible for enforcement of consumer legislation, and in this team we do two main things. We coordinate enforcement activities of the member states in cases of union-wide relevance, and we build capacity tools the national authorities can use to cooperate and investigate, including and especially, I would say, on digital markets. Now, I will, in this presentation, I will get a little bit more into the specifics of those tools, although there’s little time allowed, so I will try to go through them quite rapidly. And as you can see from the slide, you know, I will focus on three main strands of work that we are following. So the first two concern the use of AI-powered tools to investigate breaches of consumer legislation, and the first is our Internet Investigation Laboratory. Then the second is behavioural experiments that we use to test the impact of market practices on consumers. And then the third, as third last element, I will talk about a number of enforcement challenges relating to marketplaces which offer AI and platforms which offer AI services. So if we look at the eLab, the Internet Investigation Laboratory, called the eLab, is an IT service powered by artificial intelligence that the Commission has put at the disposal and exclusive use of EU national authorities of the Consumer Protection Cooperation Network that we coordinate as Commission. So the need for such a tool obviously has been said by speakers here already, comes from the inability of enforcement agencies to face enforcement challenges on digital markets, in particular monitoring with just human intervention. In a nutshell, too much to monitor with little resources and increased need to have rapid investigations which cover larger portions of market sectors. So this tool is a virtual environment which we launched in 2022 and which can be accessed remotely from anywhere in the EU, which literally means that investigators can use this tool from their own IT facilities, sitting in their offices in the Member States. And it can be used for a number of investigation activities, especially to conduct large-scale reviews of companies and practices, such as a mix of web crawlers, AI-powered tools, algorithms and analytics that run to conduct those investigations, so that they can analyze really vast amounts of data on the internet to identify indicators of specific infringements. And the parameters can be set to be investigation specific, so that AI-powered algorithms can look for different type of elements and different indicators of breaches, and I will give a quick example of that later. The eLab offers various tools and functionalities, and the… so we have… let me just turn the slide… so we have VPN, so that investigators can use hidden identity, we have specific software that allows to collect evidence as you go while you’re investigating and transfer it to your own network, including time certification of when that evidence was collected.
Then there are comprehensive analytic tools to find out information about internet domains and companies, so these are open source tools, so they can search and combine different type of sources of information across different databases and geographical areas. And they are very useful, for example, to find out who is behind a website or a webshop, but also to flag cyber security threats and risks, and also indicators of the likelihood that the website is a scam, you know, or is run by a fraudster. Now, if we look at two examples of how we use these tools and things now, the first one is Black Friday, is the price reduction tool which we used in the Black Friday sweep we did last year, where we tested… basically we used the tool to verify whether discounts presented by online retailers on Black Friday were genuine, and the result was that discounts were misleading for almost 2,000 products and 43% of the websites that we followed, and to understand whether, of course, when discounts were genuine, we had to monitor 16,000 products at least for a month preceding Black Friday sales. Then another example is the, we call it FRED, the fake reviews detector, so this is something that we use, so the machine in this case scrapes and analyzes text to try to detect whether a review, first, is human or computer generated, and then beyond that, you know, when even in case of human-generated reviews, based on the type of language and terminology used, indicates a likelihood score for whether the review is genuine or it’s fake. It’s sponsored, for instance, you know, and the machine showed 85 to 93% accuracy in this case, so this is just to give you two examples of this. Then the other strand of activity that we are running at the moment is, and we literally inaugurated this in the past month, is the use of behavioral experiments to test the impact of commercial practices on consumers, and this both to, we do this in the context of coordinated enforcement action of the CPC network that we coordinate against major business players to test whether the commitments proposed by these companies to remedy specific problems are actually going to solve the problem. So, and we also test, use these behavioral studies to test what is the, in general, what is the impact of specific commercial practices which could potentially constitute dark patterns, and this to prepare the grounds for investigations or other type of measures. So, the first, I would say, strand of work in this area we use, for example, to test the labeling of commercial content in the videos broadcasted by a very well-known platform, so whether the indication, you know, and sort of the qualification of commercial content is good enough, is prominent enough for consumers to understand it, and that’s very important, I would say, in the type of platform tools that we are confronted every day on the internet. And the second one, so we tested, for example, to see what is the impact of cookies and choices related to targeted advertising. Okay, what is interesting in these experiments is that they are calibrated based on the needs of each specific case, and we use large sample groups to produce credible, reliable, scientific results, so higher chance to identify significant statistical differences, and we use also AI-powered tools to do this, including analytics, but also eye-tracking technology connected to analytics, and that we did, for example, to test the impact of advertising on children and minors, you know, and we tested them in lab.
Now, the last thing I wanted to address here rapidly, it’s an area which is drawing a lot of attention, which is mentioned also by previous speakers, at enforcement level, not only in the EU, but also in other jurisdictions, and it’s the offering of AI-based services to consumers, such as AI-powered language models, recently developed or recently becoming popular, and these models can generate, you know, we all know these models by now, but they can generate human-like text responses to a given prompt. Such responses continue to improve based, you know, on massive amount of text data from the internet, what is called reinforcement learning from human feedback, and they are not offered only as standalone, but they have been integrated in other services offered, like platforms, like search engines, and marketplaces. While these practices have been investigated in the EU and other jurisdictions, I cannot say much about these ongoing investigations. I can, however, flag a few elements where the attention of the stakeholders at the moment is focusing, so what are the issues, what are the problems, and, you know, we see that one main area of problem is transparency of the business model, so what are really the characteristics, what is really offered, what is really the service, how is this remunerated, how is this financed, this business model is financed, what are the difference in between the free version, so-called free version, and the paid-for version, and how does this relate to the use of data, personal data, of the consumers for commercial purposes, like, for example, to send targeted advertising. Now, so there’s this part, and then, you know, of course, you know, we are very focused at the moment on the risks, you know, of those models, so we have seen that often, you know, there is manipulative or misleading content, there are biases, errors, you know, and one big concern is whether these platforms can do an adequate mitigation of those risks, and then you have the problem of the harm of specific categories of consumers, which are weaker, let’s think about minors, but not only, and associated with that, of course, is the mental health and possible addiction also, which has been experienced already, so the difficulties here is that, on the one hand, from a very, very general standpoint, we have a new, I would say, way, you know, of applying consumer legislation, and we need new reference points, you know, to apply consumer legislation to these business models, where, you know, the technological part is really still a little bit obscure, you know, so there’s a technological and scientific gap between enforcement and, you know, those companies who run these platforms, then the fact that these elements are integrated in other business models often, and then that we are at a crossroad here between protection of the economic interest of consumers, data protection, so data privacy, and the protection of health and safety, so this adds quite a bit of complexity to the work of the enforcers, who are nevertheless, you know, looking into the matter, enforcement may not be enough, and as we know, it may need to be sort of complemented by regulatory intervention also, and we will see about that. That’s all for me at this stage. Thank you.
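
The discount-genuineness check behind the Black Friday sweep Angelo describes can be sketched against the EU price-reduction rule that an announced reduction should reference the lowest price applied in the prior 30 days, which is why products were monitored for the month before the sale. A minimal check, with illustrative data:

```python
# Minimal sketch of a discount-genuineness check, assuming the EU rule
# that the advertised reference ("was") price should not exceed the
# lowest price applied in the 30 days before the reduction.
def discount_is_genuine(advertised_ref: float, sale_price: float,
                        prior_30_day_prices: list[float]) -> bool:
    """True only if the 'was' price is honest and a real reduction exists."""
    lowest_prior = min(prior_30_day_prices)
    return advertised_ref <= lowest_prior and sale_price < lowest_prior

history = [119.0, 99.0, 109.0]  # prices crawled in the month before the sale
print(discount_is_genuine(advertised_ref=149.0, sale_price=99.0,
                          prior_30_day_prices=history))  # False: inflated 'was' price
```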

Piotr Adamczewski:
Thank you, Angelo. I have to admit it is a really fascinating idea that the European Commission could share the software it is preparing. Creating our own department with a lot of people would be very costly for each single consumer protection agency to manage. We can also work on joint projects, as we did in the past and are still doing in software development, but the idea of simply approaching the Commission and using already prepared software is great.

Now it is my turn to give some insights into what we have actually done in the past and what we are working on right now. I will talk a little bit about ARBUS, the system we built for the detection of unfair clauses, focusing on the main aspects so as not to take too much time, since we need to speed up a little. Then I will share some ideas about the ongoing project on dark patterns and about preparing a white paper for enforcers.

Going back to 2020, when we first figured out that we could use artificial intelligence for enforcement actions: it was not so obvious at that time. This was before ChatGPT, and it was not so clear that natural language processing could really do such amazing things, but we thought we had to try. We focused mostly on our efficiency and checked three factors to decide which direction to go. First, we considered the databases in our possession. Second, we strictly defined our need: what exactly would make us more efficient, and in which field. And finally, we kept the public interest in view, always bearing in mind where the public actually needs us to speed up our work. The result was this project on unfair clauses, because we had a huge database for it, almost 10,000 entries of already-established unfair clauses, which we could use to prepare a proper training set to teach the machine to detect them. Second, it met our need: reading all the standard contract terms, understanding them, and indicating which provisions could be treated as unfair is quite an easy task for employees, but hugely time-consuming. And finally, there is a really strong public interest, because we have to take care of all the standard contracts and try to eliminate as much unfairness from them as possible, especially with a fast-growing e-commerce market. That means adjusting our enforcement actions and working closely with the sector, and there is no option for doing that other than automating our actions.

What about the challenges in the project? First of all, the database. As I said, we had huge material, but we still had to use a lot of human work to structure it. It is not so easy: you need to choose a specific format and prepare the data in a particular way to make the computer understand it.
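For illustration only, a clause-detection model of the kind just described, trained on a labelled corpus of contract terms, could start from something as simple as the sketch below. The scikit-learn pipeline and the toy clauses are assumptions of mine; ARBUS itself is not public, and this is not its implementation.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data standing in for the ~10,000 labelled clauses
# mentioned above (1 = established unfair clause, 0 = unobjectionable).
clauses = [
    "The seller may change the price at any time without notice.",
    "The consumer waives any right to pursue claims in court.",
    "Delivery takes place within 14 days of payment.",
    "The consumer may withdraw from the contract within 14 days.",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(clauses, labels)

# Flag new standard contract terms for human review: detection only,
# with the actual investigation still done by staff, as described above.
new_clause = ["Any disputes will be resolved solely by the seller."]
print(model.predict_proba(new_clause)[0][1])  # probability the clause is unfair
```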
The second problem we faced at that time was choosing the vendor. We were not able to hire fifty data science experts, so we decided to outsource, and choosing a proper vendor was very challenging for us. We used a special type of public tender: we prepared a proof of concept first, released the information to the market to show how the problem could be solved, and at the same time asked the market to prepare competing proofs of concept that we could compare in a very objective manner. Only on the basis of the results of this contest did we decide on the producer of the tool. And finally, there was the implementation of the software in our organization. Again, it is very challenging for traditional institutions to adopt new tools: people who have already established a way of working on a specific problem are asked to do it differently, and more efficiently in future, but at some point they need to find a good reason to accept the change. Taking all these challenges into consideration, I can say that we are now fully operating the system and have the first good results. But it is still detection, that is, flagging. It definitely helps us in the first phase of the investigation, but after a provision is flagged we have to conduct a proper investigation; that is what we cannot change right now.

A few words about our current project on dark patterns. This is again a problem of detecting violations, which are quite widespread right now: some studies have shown that many e-commerce companies are involved in dark patterns, meaning broadly that there are deceptive elements in their interfaces. We are trying to prepare a tool that will let us work much faster: not going from one website to another looking for violations, and not just waiting for signals from harmed consumers, but proactively discovering the violations. Here there is another problem, because we have to create the database; unlike in the first project, no database already exists. So we are now working on ideas for how to do that, considering the possibility of verifying how websites are constructed. The database could also be built on the outcomes of the neuromarketing research we are going to carry out. All of that should allow us to identify a specific group of factors indicating what is deceptive and what is not, and to feed the machine for proper action.

Last but not least, we are also working on preparing a white paper for agencies with the same status as ours. This is our second such project: we have already encountered some problems and been able to solve them, and we have some ideas about transparency and about how to safely deploy such software in enforcers' work. We would like to share all these ideas with colleagues from other jurisdictions and make them public next year.

Going further, we also know that the Australian Competition and Consumer Commission is working on different projects right now. Sally, if you can hear us, could you share more insights into what is going on at the ACCC?

Sally Foskett:
Okay, thank you. I'll just share my slides; I'm not used to using Zoom, I'm afraid, so is someone able to talk me through how to share my screen? I think there is a share button at the bottom. Oh yes, thank you. Okay, I will present like this; hopefully that is readable to everyone. Great.

Thank you so much for having me attend, I'm really excited to be here. Thanks to the IGF for hosting this meeting, and to all of you for joining us today. We're going to look at this from a few different angles: first, using AI to detect consumer protection issues; second, understanding AI in consumer protection cases; and third, perhaps a little more tenuously, enabling the development of consumer-centric AI.

So first, using AI to detect consumer protection issues. We have a number of projects on foot looking at methods of proactive detection, and these broadly fall into two categories. The first category is streamlined web form processing. Every year we receive hundreds of thousands of complaints from consumers about issues they've encountered when buying products and services. Many of these complaints are submitted through the ACCC's website via a large free-text field in which users type out the narrative of what has occurred. The issue with this approach is that our analysis of the form can be quite manual, so we've been experimenting with using AI to streamline the processing. The techniques we've been experimenting with include entity extraction: using natural language processing to identify parts of speech that refer to particular products, like phone, car, kettle, or hot water bottle, and also to companies. Another technique we've experimented with is classification, that is, using supervised learning to classify complaints according to the industry they relate to (agriculture, energy, health, et cetera) or the type of consumer protection issue they relate to. More recently we've also been experimenting with predictive analysis to determine how relevant a complaint is likely to be to one of the agency's enforcement and compliance priorities. I have listed on the slide some examples of our priorities from this year, which include environmental and sustainability claims that might be inaccurate, consumer issues in global and domestic supply chains, and product safety issues impacting infants and young children. The outputs of these models are not yet at a level of reliability we would be comfortable with before deploying them into production, but it is something we are actively working on, and it shows a lot of promise.
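As a hedged illustration of the entity-extraction step Sally describes, the sketch below uses spaCy's pretrained English model for company names and a hand-curated term list for products, since generic NER models have no label for "kettle". The library choice, the term list, and the sample complaint are all my assumptions; the ACCC's actual models are not public.

```python
import spacy
from spacy.matcher import PhraseMatcher

# Assumes the small English model has been downloaded:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

# Companies come from the statistical NER model (ORG entities);
# product terms come from a curated list.
product_terms = ["phone", "car", "kettle", "hot water bottle"]
matcher = PhraseMatcher(nlp.vocab, attr="LOWER")
matcher.add("PRODUCT", [nlp.make_doc(t) for t in product_terms])

complaint = ("I bought a kettle from Acme Retail Pty Ltd and it stopped "
             "working after two days; the store refused a refund.")
doc = nlp(complaint)

companies = [ent.text for ent in doc.ents if ent.label_ == "ORG"]
products = [doc[start:end].text for _, start, end in matcher(doc)]
print(companies, products)  # e.g. ['Acme Retail Pty Ltd'] ['kettle']
```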
The second category is not about analyzing data we already have, but about collecting and analyzing new sources of information, and we've heard a lot of examples of this today, such as scraping retail sites to identify so-called dark patterns. As others have pointed out, dark patterns, or manipulative design practices, are design choices that lead consumers to make purchasing decisions they might not otherwise have made. Sometimes these choices are so manipulative that we consider them misleading, in breach of the consumer law; examples include untrue was/now pricing and scarcity claims. We've also looked at subscription traps and, to a lesser extent, fake reviews. The techniques we use in this space are actually quite simple: if a claim like "only one left in stock" is hard-coded into the HTML behind the page, we know we have a problem, so a lot of this analysis is based on regular expressions, basically looking for strings of text. But we do have an AI component that we use to navigate retail sites as part of the scrapes and to identify which pages are likely to be product pages.

Turning to the second lens on this question of empowering consumers with AI, I thought it might be useful to touch on some of our cases where we have obtained and analysed algorithms used by suppliers in their interactions with consumers. This is a really important thing to be able to do from an enforcement perspective, because as algorithms are increasingly used to implement decisions across the economy (and here I'm slipping into saying algorithms instead of AI since, as Christine mentioned, AI is a bit of a misnomer), regulators must be able to understand and explain what they are doing. We've had a few cases and market inquiries where we've needed to do this, so I thought I'd explain a little more about our approach, and I'm going to speed up as well, given the time.

When we need to understand how an algorithm operates, we'll typically look at three types of information that we obtain using our statutory information-gathering powers. The first type of information is source code, that is, the code that describes the rules that process the input into the output. We've had a few cases where we have obtained source code from firms and worked through it line by line to determine how it operates. It's a very labour-intensive process, but it has proven valuable, indeed critical, for a few of our cases. The second type of information we sometimes obtain in algorithm cases is input-output data, which is useful because it tells us how the algorithm operated in practice in relation to actual consumers. It helps us establish not just whether conduct occurred, but also what the harm was: how many consumers were affected, and to what extent. Finally, the third type of information we obtain is business documentation, emails and reports and so on, which is useful because it tells us what the firm was trying to achieve. Often when firms tweak their algorithms, they'll run experiments on their customer base, so-called A/B testing, and obtaining documentation about those experiments can shed light on what was intended to be achieved. The last point I'll make on this slide, as mentioned earlier, is that, like many other regulators, we use predictive coding for document review: machine learning to help expedite the review of documentation that we obtain from firms in our investigations.
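The regular-expression side of the scraping work Sally described a moment ago is simple enough to sketch directly; the patterns and the sample HTML below are illustrative assumptions of mine, not the ACCC's actual rules.

```python
import re

# Patterns for common hard-coded scarcity claims; a real sweep would use
# a longer curated list and run over scraped live retail pages.
SCARCITY_PATTERNS = [
    re.compile(r"only\s+\d+\s+left\s+in\s+stock", re.IGNORECASE),
    re.compile(r"\d+\s+(?:people|others)\s+are\s+viewing", re.IGNORECASE),
    re.compile(r"(?:hurry|selling\s+fast|almost\s+gone)", re.IGNORECASE),
]

def flag_scarcity_claims(html: str):
    """Return scarcity claims that appear as literal text in the HTML.

    If "only 1 left in stock" is hard-coded rather than generated from
    live inventory data, the claim cannot be tracking real stock levels.
    """
    return [m.group(0) for p in SCARCITY_PATTERNS for m in p.finditer(html)]

page = '<div class="urgency">Only 1 left in stock! Selling fast!</div>'
print(flag_scarcity_claims(page))  # ['Only 1 left in stock', 'Selling fast']
```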
Very lastly, I thought I would briefly touch on a topic that is a little more future-focused: the possible emergence of consumer-centric AI. This is more about empowering consumers in the marketplace, as opposed to empowering consumer protection regulators. The ACCC has a role in implementing the Consumer Data Right, an economy-wide reform in Australia that gives consumers more control over their data: it enables them to access and share their data with accredited third parties to identify offers that might suit their needs. Currently, the Australian government is consulting publicly on draft legislation to expand the functionality of the Consumer Data Right to include what's called action initiation, which will enable accredited parties to handle not just data but also actions on behalf of consumers, with their consent. Even though it is very early days, perhaps in the future, as a result of initiatives like action initiation, we might see the emergence of more consumer-centric AI: AI that helps consumers navigate information asymmetries and bypass manipulative design practices to access the products and services most suited to their needs. And I will stop there, thank you.

Piotr Adamczewski:
Thank you very much, Sally. So it looks like a lot is actually happening in this sphere, but there is also the report by the Tony Blair Institute, which indicates that some reorganization and new planning for technological change is needed, especially in the UK. So Kevin, could you give us some recommendations from the report?

Kevin Luca Zandermann:
Yes, thank you. Thank you, Piotr, and thank you everyone for sticking with us at this hour. Our work in this space fundamentally joins two parts. The first is our work on AI for proactive public services. We believe AI has enormous potential to transform the way we deliver public services, and the big picture concerns areas such as personalized healthcare and personalized education: in many ways, creating a new paradigm, tech-enabled but also institutional, for how we think about and actually deliver public services. The second component is the work our unit has carried out in consumer protection. Last year we commissioned an important report from a consumer protection expert whom Christine knows very well, in which we looked at consumer protection regulation as a potential framework for internet regulation. These are the two main components I have tried to join for this panel.

I thought it would be useful to offer an overview of the baseline scenario, considering I am not a regulator, because it is useful to assess where we are now. It seems clear that the main challenges for most regulators around the globe are, first, that their resources are very limited and outdated rules contribute to a low-enforcement culture and therefore to the legitimization of illegitimate practices; second, that international capacity is uneven, as many other panelists have reiterated, with very low cross-border enforcement coordination; and finally, that action is reactive and slow rather than proactive, while firms entrench their power. On the disruptive-incumbent side, the most important point is that incumbents can become so dominant that they offer a very selective interpretation of consumer rights, for example prioritizing customer service excellence over other forms of safeguards. Martina, if you could move to the next slide. Okay, I can continue.

What we then looked at at the Institute is the very important review carried out by Stanford's Computational Antitrust project. It is a very comprehensive survey; in terms of coverage, it almost reaches the level the OECD would have in its comprehensive global surveys. The review deals with the adoption of computational antitrust by agencies throughout the globe, and 26 countries responded. Out of this survey I selected two examples that I think are quite telling about how consumer protection authorities are embracing AI.

The first is Finland. The Finnish Competition and Consumer Authority has carried out quite an interesting exercise using AI as part of its cartel screening process. There, instead of looking at past data to build tools for the future, they started with a sort of ex-post, reflexive testing of AI: they looked at previous cases and simulated a lot of scenarios.
In particular, they looked at previous cases dealing with two substantial Nordic cartels which operated in the asphalt paving market in Finland and Sweden. They essentially compared the baseline scenario, the real one, in which they did not have AI, against the benchmark scenario in which they could have used AI, and assessed the two performances. It appeared quite clearly that by utilizing a mix of supervised machine learning and separate distributional regression tests, they could have found out about those cartels much more quickly. This has enabled them to build new ex-officio cartel investigation tools, which could constitute a very important deterrent for companies that form cartels, because you effectively have a competition authority with quite an effective ex-officio tool for detecting these patterns.
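For flavour, one classic statistic in the cartel-screening literature is the coefficient of variation of bids within a tender, since cover bids from colluding firms tend to be suspiciously similar. The pandas sketch below, with invented data and an arbitrary threshold, shows that idea only; the Finnish authority's actual pipeline combined supervised machine learning with distributional regression tests and is not reproduced here.

```python
import pandas as pd

# Toy bidding data: in screening work, low within-tender bid dispersion
# is one classic red flag for cover bidding (all values are invented).
bids = pd.DataFrame({
    "tender": ["T1", "T1", "T1", "T2", "T2", "T2"],
    "bidder": ["A", "B", "C", "A", "B", "C"],
    "bid":    [100.0, 101.0, 100.5, 100.0, 135.0, 160.0],
})

# Coefficient of variation of bids within each tender.
cv = bids.groupby("tender")["bid"].agg(lambda s: s.std() / s.mean())

# Arbitrary illustrative threshold: unusually similar bids get flagged
# for the human-oversight stage Kevin describes next.
flagged = cv[cv < 0.05]
print(flagged)  # T1 is flagged; T2's dispersed bids look competitive
```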
They’re very very similar to the ones that the other panelists have described so I’m not going to go too much into detail, but it just it just a textbook example of what You know, in many ways the low-hanging fruit of AI as used by consumer protection authorities, particularly in Legislations such as the UK where the notifier requirements may are less a less sort of onerous than maybe in other legislations such as for example in the in the EU and Then I thought that I would have been nice to conclude Martina again, if you could move to the next I would be grateful with a series of policy questions That Angela has sort of touched upon Previously And these questions are about I think the ethics of the algorithm and in particular If you think about the Finnish model the fact that AI is very good at detecting patterns But we know from for example the application of AI in health care that it’s not necessarily as good at Detecting causality so it can be quite dangerous to to start from a from an AI detected pattern and enjoy like quite and draw our conclusions without Without human oversight in the case of the Finnish in the in because of the Finnish Authority They were very much aware of it and in fact they as part of that as part of their Assessment they have a second stage where if let’s say the I tool this was the sort of supervised learning Tells them that there is like there are for example three companies operating as a cartel they would then have a Human oversight stage where they would basically have to find to try to find any other possible explanation alternative to that and this is very closely related in the EU to article 14 of the AI act which is one of the most important article and Deals precisely with with human oversight. So for most regulators I imagine one of the most important challenges. It’s going to be to essentially draw this line like where does the The automation where there’s the AI empowered Sort of step begins and ends and when does the human human oversight beginning in what in what in what modes and finally One of one of the last question is like the role that large language models can actually play I did find I did find it interesting that in the in the survey In the survey published by Stanford out of 26 competition authorities only one the the Greek one explicitly mentioned An LLM power tool that they’re using now. I imagine that this is not the case I’m sure like plenty of other consumer authorities have been using LLMs throughout the last year But we’re probably reluctant to say so for obvious reasons, but it’s It seems like at the same time that regulators by defaults are, you know Risk adverse and these large language models do pose like quite quite important risks particularly in terms of in terms of privacy for example One of one of the competition authorities it was trialing An AI powered bar for to deal with whistleblowing so So a case where you know when you’re building a tool like that the privacy concerns are clearly very important so the thing the last question is does the generative capacity of these models have actually anything significant to offer to consumer regulation or other forms of AI probably more like low-hanging fruit are instead more suited for Regulatory environment. I think that’s it

Piotr Adamczewski:
Thank you very much, Kevin. I just need to mention that we are definitely working on setting the line properly between where the AI works and where we exercise oversight. Very shortly, as we are coming to the end of the session, I would like to ask each of the panelists a question about the future, one minute each. Christine, can I start with you?

Christine Riefa:
Great, absolutely. One minute, so I'll use three keywords. I think the future holds a lot of homework on classification and normative work: are we all talking about the same thing? What really is AI, what are its different strands, and how do we get consumer lawyers and users to actually understand what the technologists are really talking about? Collaboration is the next one. I think there is real urgency there, and I really welcome what we heard today about ICPEN trying to gather and galvanize the consumer agencies, because projects in common will probably be a better use of money and able to yield better results. My last keyword would be proactive: to completely transform the way consumer law is enforced. If we can move from the stage we are at, where we use AI simply to detect, to a place where we can actually prevent the harm being done to consumers, that would obviously be a fantastic advancement for the protection of consumers around the world. Thank you. Melanie?

Melanie MacNeil:
Thanks, Christine. Businesses are always going to move quickly: where there is a chance for money to be made, they will take it, and they are unrestricted in many ways compared to regulators, who are often too slow to address the problem. So I think collaboration is the key, and sharing our learnings so that we can all move quickly to address the issues and keep a good future focus, really recognizing that we cannot make regulations at anywhere near the pace at which technology is advancing. And I think honesty in the collaboration is key: we need not be afraid to share things we tried that did not work, and to explain why they did not work, so that other people can learn from our mistakes as well as our successes. Thank you, Melanie. Angelo?

Angelo Grieco:
Yes, thank you. For us, basically, the priority for next year will be to try to increase the use of AI in investigations. First of all, we would like to do more activities to monitor compliance, like sweeps. We would like to develop the technology so that this tool can also sweep and monitor images, videos, and sounds, to really be fit for what we need to monitor in the digital reality, and to cover different types of infringement indicators. One of our focuses will be scams and counterfeiting, but on the misleading advertising side, for example, we would like to use it for a number of breaches, such as the lack of disclosure of a material connection between influencers and traders. We would also like to improve the case-handling side, which you mentioned earlier, Piotr: to make it even easier for investigators to use the evidence at national level. As we know, the rules concerning the gathering of evidence are very jurisdiction-specific; a screenshot may be sufficient in one country but not in another. So we would like the tool to help gather, as much as possible, the evidence in the format which is required. On behavioral experiments, we are also planning to do seven more studies by the end of next year, basically one every ten weeks. Thank you very much. And Sally?

Sally Foskett:
Yes, thanks. A priority for us in the near future is actually going back to basics and thinking about the sources of data we have available. We have been giving thought to making better use of data collected by other government departments, as well as data we could potentially obtain from other parties such as data brokers, or even hospitals for instance, and also data we can collect from consumers themselves, for example by making better use of social media to detect issues.

Kevin Luca Zandermann:
Thank you, Sally, and a last word from Kevin. So for me, essentially what I said before: I would recommend that regulators have a sort of retrospective dialectic with AI, to address the questions about human oversight: where does the automation start and end, and where does human oversight start? That means looking at past cases they know very well and utilizing tools such as the ones the Finnish authority used, to test the potential but also the limitations of these models. I think the best way to do it is this very continuous process of engaging with cases that you already know very well; you may find that the AI detected patterns or things you did not notice, or perhaps that some patterns it detected were not particularly consequential for the enforcement outcome. I know regulators are always understaffed and have to deal with limited resources, but I think dedicating some time to these kinds of retrospective exercises to develop ex-officio tools can be extremely useful, especially in contexts like the EU, where we will have to deal with a very significant piece of legislation on AI whose details, particularly on human oversight, are not necessarily fully clear. Inevitably, this dialectic process will have to happen in order to understand what the right model is to operate.

Piotr Adamczewski:
Yes, thank you very much. I have definitely made my notes, and we will have a lot of work to do in the near future: a lot of things to classify, a lot of meetings and collaboration, and definitely the outcome will be proactive. I strongly believe in the work we are doing. Now I would like to close the panel, thank all the panelists for the great discussion, and of course thank the organizers for enabling us to have this discussion and for letting us run a little late with the last session. Thank you very much.

Angelo Grieco: speech speed 150 words per minute; speech length 2209 words; speech time 881 secs

Christine Riefa: speech speed 146 words per minute; speech length 1487 words; speech time 613 secs

Kevin Luca Zandermann: speech speed 174 words per minute; speech length 1931 words; speech time 667 secs

Martyna Derszniak-Noirjean: speech speed 160 words per minute; speech length 700 words; speech time 262 secs

Melanie MacNeil: speech speed 161 words per minute; speech length 2891 words; speech time 1077 secs

Piotr Adamczewski: speech speed 142 words per minute; speech length 2102 words; speech time 886 secs

Sally Foskett: speech speed 174 words per minute; speech length 1666 words; speech time 575 secs