Design Beyond Deception: A Manual for Design Practitioners | IGF 2023 Launch / Award Event #169

9 Oct 2023 00:00h - 00:30h UTC


Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Cristiana Santos

The analysis focused on discussions around e-commerce, deceptive design, dark patterns, and regulation. One of the speakers, Chandni, conducted research that had a positive influence on regulators, leading to easier unsubscription processes on platforms like Amazon. This highlights the importance of academic research in shaping policies and improving user experience in e-commerce.

Cristiana Santos brought attention to deceptive design practices from a legal standpoint. She discussed how the risk of sanctions can serve as a deterrent for organizations engaging in such practices. Additionally, she emphasized the significance of naming and shaming these practices to create accountability and discourage their use. This legal perspective sheds light on the potential consequences and strategies for tackling deceptive design in the industry.

The analysis also delved into the prevalence of dark patterns, not only within big tech companies but also in small and public organizations. Dark patterns refer to manipulative design tactics that make it difficult for users to refuse or withdraw consent. The negative sentiment surrounding dark patterns was evident, as they were found to have harmful effects on users. Studies have shown that dark patterns can cause cognitive harm, result in the loss of control over personal data, evoke negative emotional responses, and create regret over privacy choices. This highlights the need to address and mitigate the adverse impact of dark patterns on individuals’ well-being.

Furthermore, there was a call for better regulation and a shared vocabulary surrounding dark patterns. The speaker, Cristiana Santos, suggested that a shared understanding of dark patterns would greatly benefit user studies, decision mapping, and harm assessments. It is essential for regulatory bodies and scholars to align in their understanding of dark patterns to effectively regulate and combat their negative consequences. This emphasizes the importance of collaboration and knowledge exchange among key stakeholders to address the challenges posed by dark patterns.

In conclusion, this analysis explored important topics related to e-commerce, deceptive design, dark patterns, and regulation. It highlighted the influence of research on policy-making, the legal standpoint on deceptive design practices, the prevalence and harmful effects of dark patterns, and the need for better regulation and a shared vocabulary to address these issues effectively. This comprehensive examination provides valuable insights into the complexities surrounding user experience and the imperative for responsible technological practices in the digital landscape.

Titiksha Vashist

The analysis explores the issue of deceptive design and its negative impact on users and digital ecosystems. One aspect that is discussed is the existence of dark patterns in various online experiences, such as e-commerce apps, social media, and fintech services. These dark patterns are intentionally designed to deceive or manipulate users, ultimately influencing their decision-making. This can lead users to make choices that they would not have made if not for the deceptive design.

Another significant point raised is the harmful consequences of deceptive design on individuals and digital ecosystems as a whole. Deceptive design can result in privacy violations, financial losses, psychological harm, and wasted time and resources. These consequences not only affect individuals but also have broader implications for the integrity and functioning of digital ecosystems.

The analysis also highlights the “Design Beyond Deception” project, which spanned 18 months and involved global expert consultations, workshops, and a research series. The primary goal of this project was to gain a better understanding of how deceptive design impacts contexts that have received less attention in previous research. By shedding light on these understudied areas, the project aims to contribute to the overall understanding of the harmful effects of deceptive design.

Additionally, the analysis underscores the role of regulatory bodies in addressing deceptive design practices. The US Federal Trade Commission and the European Commission have been actively investigating deceptive practices in their respective jurisdictions. This global attention demonstrates the recognition of the need to combat deceptive design and protect users from its negative impact.

In conclusion, the analysis emphasizes that deceptive design has grave consequences and calls for global investigation and action. Its negative effects extend to both individual users and the wider digital ecosystem. Deceptive design distorts fair competition and leads to unfair trade practices. Therefore, it is crucial to address deceptive design in order to safeguard the integrity and well-being of users and digital systems.

Caroline Sinders

Harmful design patterns present a significant challenge on a global scale, particularly within the modern web. These patterns are characterized by their deceptive and manipulative nature, subverting users’ expectations, and they are prevalent across websites and digital platforms worldwide.

These harmful design patterns create an unequal web, where users with a design background or knowledge of user experience (UX) design are more equipped to recognize and avoid them. This knowledge gap creates a disparity between users who can navigate the web safely and those who lack this understanding.

Addressing and investigating these harmful design patterns requires a comprehensive understanding of the expected design patterns and where deception or manipulation occurs. This highlights the importance of interdisciplinary research, bringing together policymakers, regulators, and designers. The collaboration of these different areas of expertise can lead to more effective strategies to combat and mitigate the negative effects of these design patterns.

Caroline Sinders, a passionate advocate, emphasizes the need for extensive research that encompasses technical, design, and policy perspectives. Understanding the entire process of product development, including manufacturing and testing, is essential for thorough analysis of the interface. This comprehensive approach strengthens the ability to identify and address deceptive design patterns, ensuring a more user-friendly and trustworthy digital landscape.

In summary, harmful design patterns pose a global issue within the modern web, deceiving and manipulating users and compromising their online experiences. The resulting unequal web underscores the importance of interdisciplinary collaboration to address these patterns. Policymakers, regulators, and designers must work together to develop effective strategies and solutions. Extensive research, incorporating technical, design, and policy perspectives, is necessary to understand and combat deceptive design patterns, ultimately creating a more secure and user-centric digital environment.

Maitreya Shah

Deceptive design practices, particularly in accessibility overlay tools, have detrimental effects on individuals with disabilities. These tools make superficial changes to the user interface, giving the illusion of accessibility without addressing the source code. Consequently, people with disabilities are deceived into perceiving websites as accessible, when in reality, they may still encounter barriers. This not only undermines their ability to navigate and interact with online content but also hinders their equal participation in society.
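To make this mechanism concrete, the sketch below (in TypeScript, with a hypothetical widget and markup, for illustration only) shows how such an overlay can restyle a page at runtime while leaving untouched the semantics that assistive technologies depend on.

```typescript
// A minimal sketch of how a typical accessibility overlay operates: it
// injects cosmetic CSS at runtime and never touches the underlying markup.
// The function name and selectors here are hypothetical, for illustration.
function applyOverlayTweaks(): void {
  const style = document.createElement("style");
  // Cosmetic changes only: font size and contrast.
  style.textContent = `
    body { font-size: 120% !important; }
    body.high-contrast * { color: #000 !important; background: #fff !important; }
  `;
  document.head.appendChild(style);
}

// What the overlay does NOT fix: the source-level semantics that assistive
// technologies rely on. An unlabeled icon button stays unlabeled, so a
// screen reader still has nothing useful to announce:
//
//   <button><img src="cart.png"></button>          <!-- still inaccessible -->
//   <button aria-label="Add to cart">...</button>  <!-- the source-level fix -->
```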

One concerning aspect is that accessibility overlays can obstruct assistive technologies, which are essential for individuals with disabilities to access and interact with digital content. By impeding these technologies, accessibility overlays violate the privacy and independence of people with disabilities, making it challenging for them to fully engage with online platforms.

Furthermore, companies that use accessibility overlay tools are potentially disregarding their moral and legal obligation to create genuinely accessible websites. By relying on these tools, they sidestep the necessary steps to ensure that their digital content is inclusive, effectively excluding individuals with disabilities from participating in online activities.

A related issue is the possibility of users with disabilities being coerced into making unwanted purchases as a result of these deceptive design practices. When accessibility overlays create a false sense of accessibility, users may unknowingly engage in transactions that are not aligned with their preferences or needs. This highlights the harmful consequences of deceptive designs and the ethical responsibilities that businesses should uphold.

Deceptive designs are not limited to accessibility overlay tools but also extend to AI technologies, such as chatbots and large language models. These technologies are designed to exhibit human-like characteristics while interacting with users. However, this blurring of boundaries between humans and machines can be unsafe and misleading.

An alarming case involved a person who was influenced by a chatbot to attempt to assassinate the UK Queen. Although this is an extreme example, it demonstrates the potential dangers associated with deceptive designs in AI technologies. Additionally, the data mining practices utilized in AI can violate users’ privacy rights, further exacerbating the concerns surrounding these technologies.

Given the prevalence of deceptive designs in AI and emerging technology, there is a pressing need for regulations to address these practices. Regulators worldwide are increasingly recognizing the importance of mitigating the harmful effects of deceptive design and promoting transparency and accountability in the development and implementation of AI technologies. This regulatory intervention aims to shape discussions surrounding emerging technology and ensure that ethical considerations are taken into account.

In conclusion, deceptive design practices, whether in accessibility overlay tools or AI technologies, present significant challenges and risks. They harm individuals with disabilities, diminish their access to online platforms, and violate their privacy rights. It is imperative for companies to refrain from using accessibility overlay tools that deceive users and hinder full accessibility. Additionally, the regulation of AI and emerging technology is crucial to address deceptive design practices and ensure a safe, inclusive, and transparent digital environment for all.

Chandni Gupta

The research conducted on dark patterns has revealed a concerning trend of deceptive designs being used by businesses across various sectors on websites and apps. This is a cause for concern as these dark patterns are designed to manipulate and deceive users, often leading them to make decisions or take actions they did not intend. Chandni’s research has shown that many dark patterns that exist today aren’t necessarily illegal, which raises questions about the ethics behind their use.

Furthermore, data from Australia highlights the negative consequences experienced by consumers as a result of encountering dark patterns. Research revealed that 83% of Australians have experienced one or more negative consequences due to dark patterns. These consequences include compromised emotional well-being, financial loss, and a loss of control over personal information. The impact of dark patterns on consumers’ lives and their trust in businesses should not be underestimated.

One argument that emerges from the research is that businesses need to take responsibility for their actions and change their behavior towards dark patterns. The prevalence of these manipulative designs can harm consumer trust and loyalty in the long run. It is disheartening that businesses aren’t being held accountable for these practices, leading to a sense of frustration among consumers. However, some businesses have the ability to make changes today and set an example for others to follow.

Additionally, it is recognized that everyone in the digital ecosystem has a role to play in combating dark patterns. Governments, regulators, businesses, and UX designers all have a responsibility to address this issue. By working together, it is possible to create a fair, safe, and inclusive digital economy for consumers. UX designers, in particular, can share research resources with their colleagues to demonstrate the impact that better online patterns can actually have.

In conclusion, the research on dark patterns highlights the concerning prevalence of deceptive designs on websites and apps. Consumers in Australia have reported significant harm resulting from encountering dark patterns. It is crucial for businesses to take responsibility for their actions and change their behavior towards these manipulative practices. Additionally, a collective effort from all stakeholders in the digital ecosystem is needed to combat dark patterns and create a more trustworthy and inclusive online environment for consumers.

Session transcript

Titiksha Vashist:
… on this. Plainly put, dark patterns are often carefully designed to alter decision-making by users or trick users into actions they did not intend to take. Now, deceptive design is something we’ve all encountered on the web, right? They have found their way into a plethora of online experiences, from e-commerce apps to social media, from fintech services to education and so forth. Now these design choices, which may seem very innocent and innocuous on the outside, have multi-sided harms actually baked into them. And by tricking, manipulating, misdirecting or hiding information from users, these patterns harm not just the single end user of the internet, but also digital ecosystems at large. And those are also findings which resulted from the work that we did on this issue. This project called Design Beyond Deception sought to understand the harmful impacts of deceptive design specifically in understudied contexts, because a lot of the academic work so far on deceptive design was limited to the United States and European Union, and we wanted to look at what it looks like in other countries, right? Where the nature of digitalization itself is different. We also wanted to see how we can replace such design practices with practices that embody values, right? And these are values that consumers, that companies, civil society, governments want reflected online, right? And that’s precisely why our project also had a very strong practice or application component and not just a theoretical one. Now moving on to what are the harms caused by these deceptive design patterns, right? There are two ways in which we categorize these harms. One is the personal consumer detriment, which is focused on harms which you and I as people can identify we have undergone, right? These include privacy harms, financial loss, a lot of financial loss has been documented in countries such as India, psychological detriment, and time and resource loss. But at the same time, if we look deeply into the problem of deceptive design, we also realize that there are also structural consumer detriments as well as harms on the larger digital economy, including loss of trust. So a lot of research showed that when websites and apps used forced registration or price comparison prevention and so on, it weakens or distorts competition in a digital market. What that essentially means is that because of the use of these deceptive patterns, there is unfair trade practice being done in the digital economy. And this currently does not find any anchoring in our laws, but that’s precisely why this topic has to be discussed at a platform such as this. Next, I wanna talk about why we are talking about deceptive design, which seems like a more designer-centered issue, at the UN IGF. And the simple reason is we are increasingly seeing regulators worldwide investigating deceptive practices in their specific contexts. These include the Federal Trade Commission in the United States. It includes the European Commission and BEUC, which have been looking at this issue for a while and trying to understand how it can create a stronger European consumer protection law. And it has also found mention in the DSA. And consumer councils in countries such as the Netherlands, Norway, Australia, and very recently, India, have also issued guidelines and working papers and have been trying to push policy on deceptive design.
Finally, data protection authorities have been at the forefront in several jurisdictions to talk about the privacy and data harms which result from deceptive practices. Now, regulators are investigating the consumer harms, privacy and data harms, and competition harms which result from these patterns. And this is precisely where I want to move into a little bit about what our project was about. So the Design Beyond Deception project was an 18-month-long project which sought to bridge the gap between theory and practice. We held more than four large focus-group consultations, engaged with over 50 global experts in various domains, and held 20-plus in-depth interviews on this issue. We also issued a research series, which is also being launched today, by authors from across the world who focused on understudied areas. And this research was very generously supported by the University of Notre Dame and IBM’s Tech Ethics Lab in the United States. Now very quickly, going over the project process, we started out with, of course, a review of academic literature, given the multidisciplinary and cross-sectional nature of the issue itself. Second, to tap into the in-depth expertise from multiple stakeholders placed across fields of theory and practice, we did scoping interviews with experts, which helped us give shape to the rest of the project. Third, we thought that creating a new body of work which contextualizes deceptive design specifically will help deepen the conversation significantly on the issue. And that led to focus groups and workshops with stakeholders, which led us to our final goal, which is the creation of a manual for design practitioners who otherwise would not have, as a part of their curriculum or training as designers, an understanding of deceptive practices and how they may harm their end users. So the stakeholders we engaged with for this particular project were academics and researchers, design practitioners, start-ups, civil society and policy folk, and of course, industry, which included a whole bunch of people from top to bottom who are involved in different decision-making processes, which very much so impact, you know, design decisions in a company. While our manual themes span what is deceptive design for a designer and not for a researcher, we also look at rethinking the user, designing with values, design for privacy. We touch upon culturally responsible design and finally look at how regulation meets design, wherein we also probe the design practitioner to look at designing our collective future from a different standpoint. And since this manual has been made for practitioners, it is full of frameworks, activities, and teamwork, things that perhaps a product team can sit together and do on their own, right? Very quickly, talking about the research series, which we are also launching today: it focused essentially on understudied areas and understudied harms, including how, for example, crafting a definition for deceptive design is harder than it may seem. And for those of you who are lawyers in this room, you would completely understand why this is a huge challenge. We also talk about how identifying anti-competitive harms in deceptive design discourse is crucial, and how deceptive design plays out in voice interfaces, and further such research pieces, which were contributed by people across the world.
So without further ado, I would request you to explore this project online or pick up a copy of the manual and research series here from the table in the first row for you to peruse. And without taking much of the time, I would very quickly now want to invite the speakers who have graciously joined us online. We have two speakers, Chandni Gupta and Maitreya Shah, who have joined us online, and I hope they can hear me. We also have videos from two speakers who, because of time zone issues, could not join us online, but have been very generous. So, to quickly introduce the speakers, Chandni is currently the Deputy CEO and Digital Policy Director at the Consumer Policy Research Centre, which is Australia’s only dedicated consumer policy think tank. She has previously worked at the Australian Competition and Consumer Commission, the OECD and the United Nations. She has over 15 years of experience in consumer policy domestically as well as internationally, and her research focuses on exploring the consumer shift from the analogue towards the digital economy. Her work was extremely crucial in the sense that it was the first study in Australia which essentially led to policy change and consumer action on deceptive design. Maitreya Shah, who’s also joining us online today, is a blind lawyer and researcher. His work lies at the intersection of ethics and governance of emerging technologies and disability rights. He was most recently at Regulatory Genome, a spin-out of the University of Cambridge, and was previously a LAMP (Legislative Assistants to Members of Parliament) Fellow in India. He has extensively worked in areas of digital accessibility, AI governance, regulatory technologies and disability law. Currently, he is a fellow at the Berkman Klein Centre for Internet and Society at Harvard University, where he will be examining AI fairness frameworks from the standpoint of disability justice. We also have two recordings, from Caroline Sinders and Professor Cristiana Santos. Caroline Sinders is an award-winning critical designer, researcher and artist. They’re the founder of a human rights and design lab called Convocation Research Plus Design, and she’s also currently at the Information Commissioner’s Office, which is the UK’s data protection and privacy regulator. Finally, Professor Cristiana Santos is an assistant professor in privacy and data protection law at Utrecht University in the Netherlands. She’s also an expert of the Data Protection Unit of the Council of Europe and an expert for the implementation of the EDPB support pool of experts, amongst her many varied accomplishments. Without further ado, I would request Dhaneshree to play the video by Caroline Sinders, who will touch upon deceptive design from a design practitioner’s standpoint.

Caroline Sinders:
I’m a researcher and postdoctoral fellow with the Information Commissioner’s Office in the United Kingdom. That’s the data protection and privacy regulator. I also run a human rights lab called Convocation Research and Design. I really wish I could be there in person. I’m so sorry I can’t be, so I’ve made this recording instead. Thank you so much to the Pranava Institute for inviting me to be on this panel. I’m one of the contributors to their recent toolkit that’s out on deceptive design patterns, and I’m excited to present to you today and talk a little bit about why design and interdisciplinary thinking is so important when it comes to creating regulation, investigations and other ways to help curb and mitigate the harms of deceptive design patterns. I’ve also created a very small presentation that I’m excited to show to all of you. Harmful design patterns are everywhere. They’re very prolific in the modern web and they’re universally found. I have not in all of my extensive research ever come across a country or region that does not have harmful design patterns. They are in fact a global phenomenon, and a global menace is the way to think about it. My article for the Pranava Institute’s toolkit focuses on what do we do with emergent spaces, let’s say like the metaverse or IoT or voice activation, when design patterns are not standardized yet for users, meaning users have not engaged with voice activation enough to understand what all of the design patterns are within that space. Or in the case of something like the metaverse, where there’s not a lot of people using that and it’s a really emergent space, what are the healthy design patterns within that? We haven’t really come to that space yet. A lot of current design patterns are because we’ve existed in this kind of flattened modern web for quite a few years. And so there’s been many years of research to figure out what could healthy or trustworthy or pro-user design look like. And it’s that subversion where harmful design patterns exist. This kind of research is so important because it will impact how users create safety. It will impact forms of regulation. And this kind of work does really require an interdisciplinary lens. And so what does policy need to help combat harmful design patterns? Again, it’s this understanding that design is an expertise and, as I was saying earlier, this integral part of the web. What we need is to sort of broaden our idea of what, let’s say, a researcher looks like or what knowledge looks like. One of the things that’s been exciting in the many years that I’ve been researching harmful design patterns is the ability to work with all different kinds of legal experts who recognize that design is an expertise. What this means, when we’re investigating things like harmful design patterns, is actually having a knowledge of what are design patterns, what are different kinds of standardized design patterns, how to run different kinds of evaluations, like a heuristic evaluation or a usability evaluation or an accessibility evaluation. These are things that actually, there are many different ways to do them, but there are agreed-upon tests in a way, or a series of different kinds of tests people can conduct. But these are the ways in which you can sort of look at, let’s say, the health of a product, or how well or not well that product is designed.
Often when investigating harmful design patterns, what you need to find or sort of look at or help surface is where does the confusion or manipulation or exploitation lie? So where is the harmful design pattern actually subverting this expected design pattern? The expected design pattern is the one the user thinks they’re engaging with, right? Because that’s what’s being subverted, unintentionally, let’s say, or intentionally. And this is where having a background in UX design is really, really important to be able to recognize that. A paper done by the European Data Protection Board, testing with a few thousand users, found that those who were less susceptible to harmful design patterns were ones that had heard of UX design or knew what UX design was. Right? And this is really important to kind of highlight. This means we’re creating an unequal and inequitable web if the only way for people to try to avoid harmful design patterns is to have a design background. So conversely, I think to help investigate more, this kind of interdisciplinary knowledge is needed: understanding how products are made, how they’re tested, and, again, being able to do different kinds of analysis, let’s say on the interface itself. Inconsistent design, and we see this a lot in different kinds of harmful design patterns, can confuse users. It can overwhelm, so if there’s too many features or too many choices, let’s say. Misunderstanding a core audience can also lead to poor or unhelpful design decisions. But we’ll see this in the example I’m going to show. So inconsistent design can be a product name changing, choices that are not illustrated the same way, a name that doesn’t match up with what the user thinks they’re doing. All of these things can confuse users. This also means sometimes, if you’re calling something something too technical, then a user might not understand what it is. Thank you so much for having me here. I’m so sorry that this is a short talk. But one thing I wanted to really emphasize, again, is design can be an equalizing action that distills code and policy into understandable interfaces. What we need is more research, more collaborative and interdisciplinary research between policymakers, regulators, policy analysts, and designers.

Titiksha Vashist:
Thanks, Caroline. And now, moving on to Chandni, who’s joined us online. I would request Dhaneshree to put up the slides. And over to you, Chandni. Welcome, and thank you for being here. Thank you so much. I just want to confirm that you can hear me and you can see my slides? Yes. All good.

Chandni Gupta:
Excellent. So thank you so much for the introduction earlier. And thank you so much for having me. Before I begin, I do have to say congratulations to Pranava Institute, who have created such a practical tool, which I’m sure and I hope will become a valuable resource for the UX community from here on. I’m delighted to share with you today some of the insights from our research. So one of the things that we at the Consumer Policy Research Centre do is look at what is the evidence-based research that can bring about systemic change. And this was one of the ones that we have been working on for a number of months now. So it was about 18 months ago that we started our journey into looking at deceptive and manipulative designs. And as part of our research, what we really wanted to understand were two things. What are the common deceptive patterns that Australians come across most frequently? And what’s the impact on consumers? And we heard Caroline say how important it is to be able to understand that impact, and what we really wanted to do was quantify that harm. Dark patterns today are so prominent across websites and apps we use every day. They’re used to influence our decisions, our choices, our experiences. And is it in our best interest? Often not. Is it illegal? Largely not. So in case you’re wondering where dark patterns exist, as Caroline said as well, they are so prominent, they are everywhere. Even as part of our research, we asked a nationally representative sample of 2,000 Australians in our survey to list the names of those businesses they could recall using deceptive designs, and businesses from almost 50 different sectors were identified. I mentioned before that many of the dark patterns that exist today aren’t illegal. Currently in Australia, we can look through the lens of misleading and deceptive conduct, unfair contract terms or the Privacy Act. But the law currently offers a very narrow lens for how regulators can act. But are consumers experiencing harm? Well, the short answer is yes. Research revealed that 83% of Australians had experienced one or more negative consequences as a result of dark patterns being used on websites and apps. Yet eight out of the ten dark patterns we looked at could be implemented here in Australia without any consequence to businesses. Consumers in our survey reported being compromised in their emotional well-being, experiencing financial loss and feeling a real loss of control over their personal information. And it was anything from feeling pressured into sharing more data than they needed or accidentally making a purchase. In fact, as part of the qualitative part of our research, the frustration really came through. And it came down to three elements. One, there’s a lack of meaningful choice. Sometimes accepting the preferred business choice is the only way to access a product or service. For example, in our study, we saw an example of a fitness center that didn’t let you see their timetable until you created a profile on their app. Two, it’s the pervasive amount of pressure that’s put on consumers, especially once their personal details have been shared and suddenly they’re prone to hyper-personalized content or continuous direct mail. And three, and finally, there’s a sense of frustration that businesses aren’t being held accountable for any of these practices. When it comes to younger consumers, the impact only compounded. Consumers aged between 18 and 28 were more likely to experience both financial and data harms.
For example, one in three spent more than they intended, and that was 65% above the national average. This demographic in Australia often has less disposable income, so the impact of harms is likely to be felt more as well. On the flip side, there’s also a cost for businesses. Almost one in three of the consumers we surveyed stopped using the website altogether. Almost one in six felt their trust in the organization had been undermined, and more than one in four thought negatively about the organization. So while in the short term, dark patterns may lead to financial and data gains, in the long run, they will deteriorate consumer trust and loyalty. So what our research has highlighted is that everyone in the digital ecosystem has a role to play, and Titiksha mentioned this earlier as well. There’s definitely a role for governments and regulators, and we’ve been really pleased to see some of the changes that are coming about, such as the government here currently considering introducing an unfair trading prohibition, with dark patterns being included as part of that legislation. And the Privacy Act is finally getting reviewed, which currently is from the 1980s, so it not only predates dark patterns, it predates the internet. However, it’s actually businesses who are in the best position right now to make changes today and lead by example, whether it’s auditing their online presence or testing with consumers’ best interests in mind. Even small businesses can be really mindful about the off-the-shelf e-commerce products they’re choosing and which features they’re turning on and off. Now, from what I’ve heard from UX designers that have reached out to me during conferences and events, it’s often not in their hands, and much of this is a business decision that happens in another part of the company. But one of the things that they can do is share this type of research, resources such as the Pranava handbook and other work that’s happening in this space, with their colleagues to show the impact better online patterns can actually have, not only on consumers but also on their business. I’ll end with saying we’ve actually all got a role to play in ensuring a fair, safe and inclusive digital economy for consumers. Thank you so much.

Titiksha Vashist:
Thank you so much, Chandni, for that presentation. And I would very much like to point out that Chandni’s research, and the research done at her institute, in fact very recently helped push the case for making unsubscribing easier on e-commerce platforms like Amazon, and that’s a big move, right, coming from regulators. So more power to you and thank you so much for joining us today. I would now like to request Dhaneshree to play a recorded video we have from Professor Cristiana Santos, who will talk about deceptive design from a legal standpoint and share some of her work.

Cristiana Santos:
For the first time in a decision, we suggest that, along with this DPA, other enforcers name and publicize violations as dark patterns in their decisions. This way we believe that organizations can factor the risk of sanctions into their business calculations, and also policy makers can be aware of the true extent of these practices, right? And naming dark patterns is now more important than ever, especially since the DSA and the DMA codify dark patterns explicitly. So it’s a legal term. We also found that dark patterns are used both by big tech and also by small and public organizations. Most decisions refer to the user interface or to the user experience or user journey, and to information-based practices. Finally, we understood that harms caused by dark patterns are not yet assessed in decisions. Let’s have a look at the privacy-related dark patterns we found in these decisions. So in this table, you can see the data protection cases according to the practices related to dark pattern types. The majority of dark patterns relate to obstruction practices, and they are related to the difficulty of refusal and withdrawal of consent, more than 30 decisions. These are followed by forced practices, so when users withdraw consent but unnecessary trackers are loaded, or trackers are stored before consent is asked, more than 25 decisions. Finally, policy to use a service at the same time and in both, for example. So we understand that enforcement cases are a way for a general deterrence of dark patterns. And we showcase these dark pattern decisions on this website, deceptivedesign.org, and this website is being updated daily with new decisions. So, let’s talk about the harms caused by dark patterns. There is a growing body of evidence from human-computer interaction studies, from computer science studies, referring to dark patterns that actually might elicit or lead to potential or actual harm. But there are also harms related to dark patterns in privacy, and several studies focused on consent interactions, and they show several harms caused by dark patterns: labor and cognitive harms, loss of control, privacy concerns and fatigue, negative emotional responses, regretting privacy choices. All these harms provide evidence of the severity of harms. And for a concrete example, scholarly works find that pre-selected purposes, pre-selected options for processing data, or even an accept-all-purposes option at the first layer of a consent banner can or may use users’ personal data, or even very sensitive data depending on the website in question, and these can share this personal data by default with hundreds of third-party advertisers, and this might provide evidence of a potential severity and impact regarding dark pattern harms. However, consent claims, at least these scoped ones, for non-material damages are not being used within the redress system, even though there are so many decisions related to dark patterns and related to violations of consent interactions. Finally, we know that dark patterns occur in different domains, not only in privacy, right? And there are several data protection regulators and policy makers that show interest in contributing to this space of dark patterns. And we find at least five reports from EU, UK and US bodies published in 2022 alone. But these sources often lack citation provenance trails for typologies and definitions, making it difficult to trace where new specific types of dark patterns emerge and under which conditions.
On the other hand, academic literature has grown rapidly since Brignull released his original typology in 2010. In the years since, foundational work by Bösch, Gray, Mathur, and Luguri and Strahilevitz has added many new dark patterns. These typologies have had some overlaps and also some misalignments. We analysed those academic and regulatory taxonomies and counted 245 dark patterns. Many of these dark patterns indeed either overlap or misalign with other types of dark patterns coming from all these different sources. And so we constructed an ontology of dark patterns knowledge. We aggregated existing patterns, identified their provenance through direct citations and inferences, and clustered similar patterns. So we created these high-level, middle-level and low-level patterns. And this ontology of dark patterns enables a shared vocabulary for regulators and dark pattern scholars, enabling more alignment in user studies, in mapping to decisions and in discussions of harms, and for scholars also to help to trace the presence and types of dark patterns over time. Regulators could anticipate the presence of existing patterns in new contexts or domains, and it could guide detection. Thank you for your time, and if you have any questions or suggestions, please consider sending me an email. Thank you so much.
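To give a concrete picture of the multi-level structure Professor Santos describes, a minimal illustrative sketch (in TypeScript, with hypothetical names and fields, not her actual data model) might look like this:

```typescript
// Hypothetical sketch of a dark-pattern ontology: high-, middle- and
// low-level pattern types, each linked to a parent and to its citation
// provenance. Names, levels and sources are illustrative only.
type Level = "high" | "middle" | "low";

interface DarkPatternType {
  name: string;
  level: Level;
  parent?: string;   // the higher-level pattern this one specializes
  sources: string[]; // provenance trail, e.g. academic or regulatory texts
}

const ontology: DarkPatternType[] = [
  { name: "Obstruction", level: "high", sources: ["Gray et al. 2018"] },
  {
    name: "Hard to cancel",
    level: "low",
    parent: "Obstruction",
    sources: ["Brignull 2010", "Mathur et al. 2019"],
  },
];

// A shared vocabulary then becomes a lookup: map a pattern named in a
// decision or user study to its place in the hierarchy.
const byName = new Map(ontology.map((p) => [p.name, p] as const));
```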

Titiksha Vashist:
Thank you to Professor Santos for that presentation and for showing us very clearly how deceptive designs are increasingly part of the legal discourse, as different countries across the world look at them closer and make them a part of their case law. I would now finally like to invite Maitreya Shah to share his comments with us. Thank you so much, Maitreya, for your patience and thank you so much for being with us. Hi Titiksha, thank you so much for having me

Maitreya Shah:
here. I hope you can see my presentation. Yes, Maitreya, you’re all set. Thank you so much, and congratulations for launching this at one of the best platforms possible in the world to talk about this. So yeah, hello everyone. I’m Maitreya Shah, and thank you so much, Titiksha and Pranava, for that generous introduction. So my fellow speakers have already touched upon many forms of deceptive designs and how they interact with consumers, how they pose harm to people, and what are the dark patterns that exist on the internet and elsewhere today. You know, dark patterns, deceptive designs are quite multidisciplinary with the rise of AI and emerging technologies. I intend to talk about two things very briefly. The first is the piece that I wrote for the research series that Pranava is launching today, which deals with accessibility overlays and their harms on people with disabilities. The other relates to my work, because a lot of my work is on AI bias, fairness, and ethics. I intend to briefly touch upon the deceptive design dark patterns that are emerging through AI and emerging technologies and the new models that we see in the world today. To start with, deceptive design practices in accessibility overlay tools. I wrote an analytical piece for the ethical design research series of Pranava, in which I evaluated what are called accessibility overlay tools. Before I delve into what accessibility overlay tools are and what the deceptive design practices are, I’ll give you a brief on accessibility. Accessibility is the idea of making websites and applications usable for people with disabilities. It is a legal right and a legal obligation under various instruments, international and domestic. I’ve given here a few examples. These accessibility overlay tools are basically designed to subvert the legal obligations to make websites accessible. I have tried to analyze these tools from a deceptive design lens and call out the dark patterns and how they end up harming people with disabilities on the Internet. So a generic overlay, as a lot of you who come from the design side of things know, usually sits on the UI or UX side of websites or web applications. It is, you know, in the form of pop-ups or these, you know, JavaScript boxes that usually come up, and they tend to divert or obstruct the attention of users on websites and, you know, shift their focus to something different, like sign-up boxes or advertisements and so on. An accessibility overlay tool is exactly like this. However, what it claims to do is it claims to make the website accessible for people with disabilities. Now, in line with a lot of international standards and regulations, the World Wide Web Consortium has come out with web accessibility guidelines and standards that are guiding developers and designers to make websites accessible. And these standards require a lot of manual labor and a lot of manual design input, right from the source code. So these accessibility overlay tools do not end up making any changes in the source code. They only make changes to the user interface side of things. They only basically change the font, color, contrast, or size, or maybe, you know, add some image descriptions on the website, which are things that are already built into the assistive technology of people with disabilities. So accessibility overlay tools are not doing anything new. Assistive technology like screen readers that people with blindness, for example, use already has a lot of these features built in. So what are the harms?
So these companies that sell these accessibility overlay tools claim that they are making the website accessible. And what ends up happening is, whenever there is an accessibility overlay tool in a website, there is a toolbar and an announcement on the top of the website, on its landing page, that says that, you know, the website is accessible, and the person visiting the website can utilize this feature to get an accessible, you know, experience and interaction on the website. So, people with disabilities, their trust gets kindled. They tend to use the website with the anticipation that the website would be accessible, and what ends up happening is that they are deceived and manipulated into choices that they do not intend to make, which is inherently the idea of deceptive design. This is done to, as I earlier said, subvert the legal obligation to make websites accessible. Companies employ designers that don’t incorporate accessibility features from the very inception of the website-building process, and then they are afraid of lawsuits and paying hefty compensations. So, they resort to these sorts of contrivances and shortcuts to make their websites accessible. So, there are many issues. Before I come to the strategies for countering these tools, there are many issues that end up happening with people with disabilities when these overlay tools are deployed in a website or a web interface. So, firstly, many screen readers that blind people especially use get obstructed by these overlay tools. These overlay tools also tend to impede the privacy of people with disabilities, because they detect assistive technology. And there are many other issues, like false and inaccurate image descriptions that might lure or manipulate people into purchasing things that they do not want to. You know, in line with the idea of today’s discussion, I have given here a few points around strategies that would move us from theory to practice. How do we, you know, counter these accessibility overlay tools? How do we see that, you know, companies don’t use these tools and that they don’t harm people with disabilities? So, these are a few examples that I have personally researched and I’ve gathered from across the globe that are, you know, somehow effective strategies to counter the deceptive practices of these tools, including regulatory actions, community advocacy, tools that could counter these accessibility overlays, and educating and sensitizing designers and web developers to start with. So, this was possible through, you know, Pranava’s collaboration and the consultation that I could have with them, to think about, you know, how these accessibility issues could manifest in deceptive design language and how they harm people with disabilities, and to understand this issue that is quite marginalized and very little talked about. I’ll quickly move to, you know, artificial intelligence technologies. There is a lot of hype and a lot of discussion around ChatGPT and tools today. You know, we interact with chatbots and with these new forms of large language model technologies today. So, these are the kind of issues that one faces. I, in my presentation, have two broad issues that I wanted to focus on, two examples that I wanted to share with you that have come up in my research so far. And I’ll be very brief because I’m mindful of the lack of time.
So a lot of regulators, they are talking about and they are making people aware about the deceptive design practices of anthropomorphism, which is basically human characteristics that are carried by non-human entities. So for example, chatbots and generative AI models that take on human characteristics and blur those boundaries between humans and tech, and that tend to manipulate users, that tend to subvert users’ autonomy and their privacy. In the previous slide, I’d given an example where a person back in 2021 was influenced by a chatbot and had attempted to assassinate the Queen of the United Kingdom. So these are the kind of issues that one could face because of chatbots and large language models. I’m so sorry to interrupt you. Could you just very quickly wrap up? We’re one minute over time. And I would just say, yeah, thank you. Thank you. This is very briefly, again, an example from data mining practices and how they tend to violate the privacy of users. I’ll quickly move to these few examples, again, to move from theory to practice: how regulators are trying to shape the discussion around AI and emerging tech and deceptive design practices, and how you or I as lawyers, designers, or community advocates can influence the work on this. Yeah, that’s it. Thank you so much. I’m sorry for running over time.

Titiksha Vashist:
Thank you so much for joining us, Maitreya, and for sharing your specific research at the intersection of deceptive design and disability. And I wish you all the best for a lot of your forthcoming work on AI and deceptive design. That being said, in the interest of time, let me thank everyone for joining us for this particular launch event. You see the QR code to our project right up here on the screen. And if you’d like to grab a physical copy of the manual or the research series, they’re right here on the front desk. Again, I would like to extend my gratitude to both Chandni and Maitreya, who are joining us at very, very odd times. But thank you for making it to this event. And thank you to everyone for attending this particular session. We are definitely available offline if you are interested in this issue and want to talk more about it. Thank you. Thank you.

Speaker | Speech speed | Speech length | Speech time
Caroline Sinders | 188 words per minute | 1099 words | 352 secs
Chandni Gupta | 160 words per minute | 1076 words | 403 secs
Cristiana Santos | 128 words per minute | 859 words | 401 secs
Maitreya Shah | 143 words per minute | 1536 words | 643 secs
Titiksha Vashist | 127 words per minute | 2361 words | 1119 secs