Unveiling Trade Secrets: Exploring the Implications of Trade Agreements for AI Regulation in the Global South

14 Sep 2023 09:15h - 10:15h

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Audience

The analysis provides a comprehensive examination of three different arguments related to regulation and trade agreements.

The first argument centres on risk-based regulation. It contends that such regulation only begins once a concrete risk is identified, often disregarding instances where the impact may not be considered extreme despite a high likelihood of occurrence. The argument emphasises that risk is calculated by multiplying likelihood by impact. It further highlights the concern that certain AI applications with a high likelihood but a perceived low impact often go unregulated. The overall sentiment of this argument is negative, reflecting a concern that risk-based regulation may overlook potential harms due to a narrow focus on extreme impacts.
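The likelihood-times-impact calculus behind this concern can be made concrete with a small sketch. The scores, threshold value, and function names below are hypothetical, invented purely to illustrate the argument rather than drawn from any actual regulatory framework:

```python
# Toy illustration of the concern with risk-based regulation described above.
# All numbers are hypothetical; they only make the arithmetic visible.

def risk_score(likelihood: float, impact: float) -> float:
    """Risk is commonly modelled as likelihood multiplied by impact."""
    return likelihood * impact

# Hypothetical cut-off: only applications scoring above this are regulated.
REGULATORY_THRESHOLD = 0.5

# A rare but catastrophic AI application clears the bar...
rare_catastrophic = risk_score(likelihood=0.1, impact=9.0)

# ...while a pervasive application with modest per-case impact does not,
# even though it affects far more people far more often.
common_low_impact = risk_score(likelihood=0.9, impact=0.4)

print(rare_catastrophic > REGULATORY_THRESHOLD)  # True: caught by the rules
print(common_low_impact > REGULATORY_THRESHOLD)  # False: slips through
```

Under such a threshold model, the high-likelihood, low-impact application falls below the regulatory cut-off entirely, which is precisely the gap the rights-based argument in the next paragraph seeks to close by imposing obligations in every case.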

The second argument supports rights-based regulation. This regulatory approach insists on transparency from every AI system, regardless of risk. The argument points out that rights to transparency exist in every single case, creating legal obligations for companies to provide information. This argument demonstrates a positive sentiment towards rights-based regulation, as it establishes a baseline where transparency is required for all AI systems, ensuring accountability and public trust.

The third argument explores trade agreements, with a neutral stance from the audience. The audience’s belief suggests that they perceive trade agreements to be more risk-based than rights-based. Although the details of this argument are limited, it provides an insightful perspective on how the audience perceives the nature of trade agreements. The audience’s neutrality implies a reserved stance, neither fully supporting nor opposing the notion that trade agreements are more risk-based.

Overall, the analysis highlights the contrasting perspectives and approaches to regulation, specifically the comparison between risk-based and rights-based regulation. It underscores the importance of striking a balance between tangible risks and the potential impact of AI applications. Additionally, the analysis offers thought-provoking insights into the perceived relationship between trade agreements and risk.

Speaker 1

Employers are increasingly using AI to manage their workforce, affecting various aspects such as hiring, promotions, and terminations. However, the use of a risk-based approach in AI decision-making makes it difficult for workers to challenge or contest decisions made by AI systems. To address this issue, proponents argue for a rights-based approach to AI regulation, prioritising the protection of workers’ rights. One proposal is to require companies using AI to demonstrate that their AI systems do not violate workers’ rights before implementing them. This obligation would help ensure that workers’ rights are safeguarded and prevent potential violations.

In the context of trade agreements, there is often a ban on source code disclosure, with minimal exceptions. Critics argue that this approach can be harmful as it limits transparency and accountability in AI systems. Moreover, control over data is crucial for economic development. Currently, investment decisions regarding data usage are primarily driven by the private sector. To overcome this, advocates contend that data should be used in the public interest to address societal problems.

Another important aspect is the relationship between digital industrialization, data sovereignty, and development. Countries should have access to the data they produce, as it plays a vital role in their progress. However, concerns arise over the monopolization of data by big tech corporations, leading to digital colonialism. Opposing big tech’s push for the ‘free flow of data’ is justified, as that push often results in one-sided corporate transfer and exploitation of data from developing countries.

Furthermore, maintaining policy space for local regulation in the public interest is essential. There is a concern that including environmental services in trade agreements may limit the ability to regulate in the public interest. Preserving policy space allows for regulations that benefit society and prevent undue influence or interference in local issues.

In summary, the increased use of AI in the workplace has significant implications for workers’ rights, necessitating a rights-based approach to AI regulation. A ban on source code disclosure in trade agreements is seen as harmful, while control over data is viewed as crucial for economic development, with emphasis on using it in the public interest. Digital industrialization and data sovereignty are crucial for development, while opposing the ‘free flow of data’ protects developing countries from exploitation. Lastly, maintaining policy space for local regulation ensures tailored regulations that serve local needs, including regulation in the public interest.

Mariana Rielli

Brazil has been diligently working for the past two years to establish a comprehensive legal framework for the regulation of artificial intelligence (AI). The proposed AI regulation in Brazil is rights-based, emphasizing the protection of fundamental rights and data protection in accordance with constitutional provisions. This approach takes into account Brazil’s history of racial inequality and discrimination, ensuring that the regulation addresses the country’s social challenges.

However, Brazil’s involvement in trade agreements has raised concerns about potential conflicts with its internal AI regulations. There appears to be a shift in Brazil’s stance toward closer alignment with the United States, a shift that disregards Brazil’s own internal regulations. This conflict between trade agreements and internal regulations may pose obstacles to the effective implementation of the proposed AI bill.

One aspect of the proposed AI regulation in Brazil is its risk-based approach. Critics argue that this approach only considers tangible risks, neglecting likely occurrences with lower impact. They propose a more comprehensive risk assessment that also takes into account probable scenarios that may not have extreme consequences. This highlights the need for a balanced risk assessment considering both likelihood and impact.

Transparency is another crucial element addressed in the proposed AI regulation in Brazil. Companies are required to provide information about their AI systems, and individuals have the right to litigate if their rights are violated. This rights-based approach ensures a minimum level of transparency in every case involving AI systems, irrespective of associated risks.

The European Union’s AI Act proposal also follows a risk-based approach, similar to the proposal under consideration in Brazil. This suggests some alignment in the global approach to AI regulation. However, it is important to distinguish trade agreements from risk-based AI regulation and avoid compromising the integrity of AI regulation due to trade agreements.

Advocates for comprehensive AI regulation argue that real-life examples of AI’s impact on second-generation rights should be considered. One notable example is the scandal in the Netherlands involving automated decision-making systems used to identify potential welfare fraud. Unfortunately, this system wrongly affected innocent individuals, who suffered the loss of their livelihoods. Such instances underline the importance of robust regulation to prevent future abuses and protect individual rights.

Furthermore, there is a significant power and information asymmetry surrounding AI’s impact, with most people unaware of the consequences and unable to trace them back to AI algorithms. This knowledge gap perpetuates power imbalances and undermines transparency. Addressing this issue requires fostering collective imagination, creativity, and accessibility to AI technology, empowering individuals with the necessary knowledge to make informed decisions and prevent the concentration of power.

In conclusion, Brazil’s ongoing efforts to establish a comprehensive legal framework for AI regulation are commendable. The proposed regulation adopts a rights-based approach, prioritizing fundamental rights and data protection. However, challenges arise due to Brazil’s involvement in trade agreements, potentially conflicting with its internal regulations. The risk-based nature of the proposed regulation necessitates considering likely occurrences with lower impact. Transparency requirements and lessons from real-life examples of AI’s impact must be incorporated into the regulatory framework. Addressing the power and information asymmetry regarding AI’s impact is crucial for ensuring a fair and equitable AI landscape in Brazil and beyond.

Sofia

Artificial Intelligence (AI) in Latin America faces numerous challenges, as highlighted at a regional summit. One concern raised is limited access to data needed for AI tool development. Many developers struggle to find suitable local data and must buy it from Europe or Asia, hindering accurate and region-specific AI applications. The KIPPO 2023 summit emphasized the importance of having access to relevant and reliable data for effective AI tools.

Another challenge is the impact of free trade agreements on AI development and regulation in the region. For example, the Trans-Pacific Partnership does not permit taxing data flows or requiring access to source code. This creates a gap between AI regulators and foreign affairs authorities, potentially disadvantaging Latin American countries that wish to retain some data locally. This issue raises concerns about regulating digital rights and data flows in the region.

The summit stressed the need to evaluate the environmental and social impacts of AI when creating regulations. The app ‘Rappi’ was cited as an example, where an algorithm requiring unnecessary worker movement caused environmental and safety concerns. However, algorithm changes can mitigate such impacts while maintaining profitability. This highlights the importance of considering the broader implications of AI on climate action, decent work, and public health.
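The dispatch rule at issue can be sketched in a few lines. This is a hypothetical illustration of the behaviour described in the session, not Rappi’s actual code; the class and function names are invented:

```python
# Hypothetical sketch of the two dispatch rules described above.
# Not Rappi's actual algorithm; names and logic are illustrative only.

from dataclasses import dataclass

@dataclass
class Rider:
    is_moving: bool   # circulating through the city
    at_store: bool    # waiting at the store door

def eligible_old(rider: Rider) -> bool:
    # Reported original behaviour: tasks were only assigned to riders in
    # motion, incentivising constant circulation (fuel use, accident risk).
    return rider.is_moving

def eligible_new(rider: Rider) -> bool:
    # Behaviour after the change: riders waiting at the store receive tasks,
    # removing the incentive to ride around empty.
    return rider.at_store

rider = Rider(is_moving=False, at_store=True)
print(eligible_old(rider))  # False: old rule forces pointless circulation
print(eligible_new(rider))  # True: new rule lets the rider wait in place
```

The point of the example is that the harmful incentive lived in a single, auditable condition: a regulator with access to the algorithm’s parameters could have identified and required this change before workers were affected, which is the assessment power the session argues for.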

Latin America also calls for more time and resources to develop its own AI technologies and regulatory frameworks. The dialogue between private and public sectors regarding AI development is still in its early stages, and existing trade agreements may restrict the region’s ability to create tailored policies and regulations. However, Latin America has the potential to build sovereign technologies addressing regional challenges.

Regulating AI presents challenges due to its rapidly evolving nature. Regulators struggle to keep pace with AI development and predict future impacts. This poses difficulties in developing appropriate assessment and regulation mechanisms, making effective governance a constant challenge.

The impact of AI on collective rights, particularly in the workplace, is a significant concern. Trade unions advocate for the defense of workers’ rights and demand the right to assess AI systems. Unions ensure AI systems prioritize collective rights and well-being, and can demand necessary changes when workers are adversely affected.

Additionally, there is a growing call for more democratic regulation of AI. Community rights should be given equal priority alongside individual rights. Unions play a vital role in AI regulation, enabling them to contribute to the decision-making process. Prioritizing community rights and involving unions can lead to inclusive and ethical AI development and governance.

In conclusion, the AI summit in Latin America highlighted the challenges and concerns surrounding AI development and regulation in the region. Limited access to data, the impact of free trade agreements on digital rights, environmental and social considerations, the need for more resources, the evolving nature of AI, the impact on collective rights, and the call for democratic regulation are key focus areas. Effective and inclusive AI policies and practices in Latin America require a collaborative approach involving multiple stakeholders.

Moderator

Latin America has immense potential in the field of Artificial Intelligence (AI) and is dedicated to developing its own technological solutions to tackle regional issues. The region is home to exceptional engineers and experts who are creating top-quality AI tools. Furthermore, Latin American countries are actively collaborating with UNESCO and adhering to AI principles. Remarkably, there is a growing number of startups and innovative tools based on AI, particularly in healthcare and education.

Despite this potential, Latin America encounters significant challenges, especially when it comes to acquiring relevant data. Engineers and developers often lack access to suitable data, compelling them to purchase it from European and Asian countries. Consequently, the AI tools produced are less accurate as they do not adequately reflect the local populations they aim to assist.

Another obstacle lies in the need for more time and policy freedom to establish regulations governing AI usage. The region experiences delays and ill-informed negotiations in harmonising AI regulations within Latin America and the rest of the world. It is crucial to foster more mature and informed debate and dialogue between the public and private sectors to establish effective and appropriate AI regulations.

Free trade agreements, such as the Trans-Pacific Partnership (TPP), present an additional challenge to the development and control of AI in Latin America. These agreements restrict the taxing of data flows and limit access to data and source code. As a result, they can impede the region’s ability to regulate AI effectively within its own boundaries.

Moreover, the current risk-based approach to AI places workers at a disadvantage. Under this approach, the burden of proof falls on the worker to demonstrate that their rights have been violated by an AI system. This is often difficult, as access to the inner workings of the AI system is typically locked behind intellectual property rights and trade secrets.

However, adopting a rights-based approach to AI could ensure greater accountability and prevent harm to workers. In this approach, companies would be required to demonstrate that their AI systems do not violate workers’ rights before implementing them. This proactive approach has the potential to address issues before they occur, safeguarding workers’ rights in the process.

Based on the analysis, it is evident that Latin America requires proactive regulations to protect workers’ rights against the unchecked implementation of AI. The labour ministry should have the authority to verify AI software for potential violations before its implementation in the workplace. The current practice, which heavily favors companies by allowing them to shield their AI behind intellectual property and trade secrets without proper scrutiny, needs to be reevaluated.

In conclusion, Latin America possesses significant potential in the field of AI, with exceptional engineers and experts creating top-quality AI tools. However, there are challenges to overcome, including the need for relevant data, ample time for policy development, and the restrictions imposed by free trade agreements. Additionally, the current risk-based approach to AI disadvantages workers, underscoring the importance of adopting a rights-based approach. Implementing proactive regulations that protect workers’ rights and allowing scrutiny of AI systems by the labour ministry are crucial steps towards maximising the potential benefits of AI in Latin America.

Session transcript

Moderator:
at the Transnational Institute. She’s also the Director of the Observatory of AI Social Impacts at the National University of Tres de Febrero in Argentina. She’s a long-time expert on the intersection of trade policy and digital rights across Latin America. So, Sofia, can you give us more of the lay of the land here and what trade agreements have already passed with these terms and where there are some dangers ahead?

Sofia:
Well, thank you, Melanie. I’m going to put the mic just in case it’s the recording going on because I assume everybody’s hearing me okay. So, in Latin America, the situation is really promising because I think with AI, what we have seen is that I think there’s no other technology where diversity is most important because with AI, we need representation of every country. We need every aspect of every country in the culture to be represented in AI because as Deborah said, AI is taking decisions over human beings but not only decisions. It’s also, for example, trying to see what diseases are there to try to cure and for those AI tools to become really accurate and precise, you need the data of those populations that you are trying to help with those tools. So, AI is a tool that needs to be done inside the region where it is transforming the society. It’s a technology that it is really important for it to be inclusive but to be also everybody to feel represented by that AI tool. And in Latin America, we have great engineers and great experts that are doing great AI tools throughout Latin America. We have had this year on March, the first summit of all the AI experts in Latin America in the city of Montevideo in Uruguay that it was called KIPPO 2023 and there, the AI community got together and they were really worried about the development of these AI tools that were going to transform Latin America, our healthcare systems, our education systems, our public policy, and so on and so forth. But they were really worried about the access to the raw material, to the data, because most of our engineers, most of our developers do not find suitable data to transform and to make new tools. And they have to buy data from European countries, from Asian countries, and, well, you know, it’s not so accurate, the tool that you develop, when the data that you are using is not the data of the people here which you are trying to help.
So they did that, and they said that on the final statement of the KIPPO 2023. Also, all these innovative transformation tools that we are developing, most of them are from the states, or they are from private-public partnerships that arise from Latin America, because the public sector has the money and has the support to give to these engineers for the companies to grow, you know. And so we have in Latin America now a lot of start-ups and a lot of new tools that are developed for the healthcare system and for education that come from a public-private sector alliance between developers and what the state can bring in terms of data, but also in terms of resources for these tools to happen. So these alternative technologies that we are developing are having a great struggle, that is to find the raw material to develop and to find the resources to make it happen. The resources are being facilitated by the states, but the data is still a big question mark that we have in the region. Also, we need… Of course, normative frameworks, we need regulation, we need national regulations, and in this case, Latin America is really heterogeneous, it’s not really one region that you can describe with one word, because you have countries like Bolivia and Paraguay, who doesn’t have data protection laws still, they’re still going through those debates. And there’s some other countries like Argentina, Brazil, or Uruguay, which are more advanced on this. And we have more regulations over data and over data flows, and how can you protect privacy, and how you protect technology, and they have incentives for technological programs to happen in the region. But, of course, this is something that needs more time for discussions. 
For example, in Bolivia and in Paraguay, they’re having really big discussions about their data protection law, and I think it’s important for them to have more time to discuss this in order to get it right, because we’re in a society where privacy is really important, and we all know, and in case of AI, well, it’s in another level of importance. Of course, one thing that I always see in Latin America is that we do not only discuss about privacy. We like to discuss also about second-generation rights, and, for example, about the environment and how AI will impact environment, how it will impact labor rights and collective rights. And it’s because we have a history and we have a tradition of taking care of these issues in Latin America. Latin America has another way of regulating things. There’s an example that I like to give in this case, that it’s an example that we are discussing right now in Argentina about how to manage environmental issues in AI. And it’s the example of a company, you know, these delivery platform workers, companies? Okay, there’s one that is from Colombia. It’s called Rappi, and when Rappi came to the city of Buenos Aires, You could see that the riders were going all around the city all the time. And that’s because the algorithm was set that they didn’t receive a task. A task wasn’t appointed to them unless they were moving around the city. This brought, of course, advertisements for the company because you had to have the backpack in your back and you had to be moving all the time. But this brought, of course, environmental issues because most of our platform workers go in motorcycles. They don’t go by bicycle. And so they were using fuel, they were using the motorcycle, the scooters, without doing anything, without doing a task, without delivering food or delivering anything. Also, it brought dangers and hazards for that person because they were all day in traffic, more exposed to accidents. And so the workers started complaining. 
They went on a strike. And now the algorithm has changed. And now you can see them, that they’re in the front door of the stores waiting for the task to be appointed to them. And this is an example of how the state could have said, OK, give me your algorithm or explain to me what your algorithm does. And I can assess if it doesn’t have an environmental impact or a social impact in the case of the workers. And once you change these parameters, you can go ahead and do your business. And it doesn’t mean that you’re going to lose money because, actually, they did change it after the strike. So you can see that it was a way of preventing the damage that the workers felt and the environment felt, just checking the algorithm, just checking what the algorithm was doing. And so in our AI laws and the regulation that we are trying to think, we are trying to put these issues on the table, not only about privacy or about how it affects human life, but also how it affects second generation rights. And I think this is really important. We are in a forum about the environmental, the environment, and I think that we should really take care of how the algorithms work in a way that we don’t even think about that affects the environment also. So, right now the governments are doing a lot with UNESCO and the AI principles that UNESCO did. UNESCO is doing a lot of work with the governments in Latin America to try to put the AI principles in the table that are ethical for the governments to use and to promote. But a lot in regulation is missing. We don’t have actual regulations regulating about AI, but we are moving forward. I think the debate is still really immature and needs more time, needs more time to develop because we have our engineers working in the tools, we have the governments working in regulations, but the pathways haven’t yet really met. And we need more time for that relationship to mature in order to get the regulation right.
Now, in terms of trade agreements, of course we have heterogeneity again, we have a heterogeneous situation throughout Latin America. We have in the Pacific region, we have the TPP already going on with a really aggressive electronic commerce chapter who doesn’t allow to tax data flows, who doesn’t allow to access to data once the data is taken away from the country, which doesn’t allow the government to ask for the source code to audit it. We have other countries in the region like the Mercosur countries, which do not have quite yet free trade agreements that are as aggressive in electronic commerce as the ones in the Pacific. But of course, we see that the free trade agreements are starting to grow in the region and little by little the countries are starting to sign more and more of them. And sometimes the Ministries of Foreign Affairs do not really talk with the other regulators that are working on AI inside the country. And I find this a little bit struggling because maybe in the future, as Debora was saying, once you sign a free trade agreement, you will find difficulties to regulate inside the country to give these AI tools that we are developing the right framework to operate and not damage the Latin America society. I have been in many meetings with the U.S. Embassy, for example, in the Ministry of Foreign Affairs in Argentina, and they’re all the time saying, well, data flows are great for MSMEs because if data leaves, well, then we can manage data more good than if the data stays in the country.
And I always said, well, it’s not that I want every data, every piece of data to stay in my country, but maybe there’s some of the data like the Australian government did that they said, okay, the healthcare data, I want to keep it on my country to develop AI tools for my own citizens that I always said, it’s not the same to detect a cancer with a database made in China than with a database made in Argentina or in Bolivia because we have a different diet, a different climate, a different way of living, different things. It’s totally different. So sometimes I see this, how they are trying to push the agenda in the international regulations, and I still see a big problem of accessing to the raw material, which is data. Latin American countries are having a great deal with that. And I think that it should be good to leave some policy space in order for governments to say, okay, we have these public-private developed tools going on that we can manage here. We have the resources to do so and it will help our healthcare system, our education system. Let’s keep the data here so we have access to it and we can manage it in the right way to develop tools for Latin Americans. So what I see to sum up a little bit is that Latin America has a lot of great potential in AI. It can construct alternative technologies and sovereign technologies that can resolve problems in the region, but it needs more policy space to regulate and more time to build those policies, to get the resources and to incentivise new forms of technological development. It needs that time and it needs those resources because the dialogue between the public and the private sector is still really immature. So the free trade agreements could put handcuffs on these different issues that I brought to you and that could be a mistake. We need first to harmonize what we understand as AI and what kind of AI tools do we need in the region.
So we need to harmonize in the region before harmonizing our regulation with the world because we need first to understand what AI means for Latin America and for the blocks, the economic blocks that are within Latin America and to understand what type of regulations do we need in terms of environmental and collective rights and not only privacy. So also the second generation rights before jumping into a global debate that will restrain what Latin Americans understand what AI will be in the region. This process, it’s really growing. It’s in a really immature stage. So what I’m thinking is that we need more time. We need more time. And I know that the world keeps going on and the negotiations keep going on. And sometimes countries are being pushed to sign agreements that they don’t really actually understand what the consequences in the future they will bring in terms of regulation. So I think this is a hazard that we need to take into account because it can really undermine the efforts of Latin America in building their own tools to understand their own population and have sovereign technologies that can really solve problems in the region. Thank you very much.

Moderator:
Thank you, Sofia, especially for explaining the importance of preserving this policy space and not closing it off before we can really figure out what to do about this problem. You know, you mentioned this gap between the regulators who work on AI and these other technologies versus the trade negotiators signing these agreements. And we’ve seen the same thing in the United States, which even though we’re, you know, the leading proponent of these terms, you know, across the world, now that the domestic agencies that work on these issues are starting to become aware of what the trade negotiators have been doing, there’s a lot of internal conflict. And, you know, we’ve also had more academics, civil society, and members of Congress raising concern about these issues. And so even within the United States, there’s a growing debate about this. So there may be more bargaining power for other countries to resist that push than it may first seem. So last but not least, I’ll turn to Mariana Rielli. Mariana is a director at Data Privacy Brazil, a Brazilian think tank working on data protection and other fundamental rights in the face of the emergence of new technologies, social inequalities, and power asymmetries. She is a lawyer with a background in human rights and is currently pursuing a master’s in philosophy of data and digital societies at Tilburg University in the Netherlands. So Mariana, can you help us zoom in with a case study? Where does Brazil stand in the digital trade agenda, particularly in the context of its proposed AI bill?

Mariana Rielli:
Thank you, Melanie and my fellow speakers. Yes, I’m gonna talk a little bit about that, about the digital trade agenda. But first, I would like to say that this panel was proposed with REBRIP. REBRIP is the Brazilian network for the integration of the peoples. That’s the translation to English. And at Data Privacy Brazil, alongside REBRIP, we’ve been trying to kind of bridge the gap between civil society that has been for years now concerned with digital rights and internet governance matters. So Brazil has a long history of successfully regulating internet governance issues since the Brazilian Civil Rights Framework for the Internet in 2014, and then the Brazilian General Data Protection Law in 2018. And that has been done with a lot of multi-stakeholder work and coalitions between civil society, sometimes the private sector. And there is like a very large civil society community around digital rights matters and how those asymmetries that Melanie mentioned, for example, between countries, between regions, the fact that Brazil is a large consumer of technology and not so much a producer of some of those technologies. Those issues are very like at the top of the public agenda, but they’re not like super connected with the discussions on trade and digital trade. And that is for several reasons that were mentioned: how undemocratic those discussions are, and how difficult it is to access those spaces, and how difficult it is to sometimes even know what is happening. So we have been trying as a digital rights organization, and not as an organization that has historically worked with trade, we have been trying to kind of bridge that gap and also learn more. And with that being said, I thought of maybe giving like a brief overview of what has been happening in terms of AI regulation. So Sofia mentioned a little bit about the region as a whole.
Brazil has an ongoing process to regulate AI systems, so I will talk about that, about Brazil's current stance on digital trade, and about how those things interact, to exemplify some of what has been discussed here. To start off: for the last two years, Brazil has been working on a process to create a comprehensive legal framework for AI. The current text on the table was drafted by a committee of experts over about ten months, with the support of multi-stakeholder groups and a larger community that participated in several public hearings; there were many participatory processes behind this proposal. At its core, the most important concern was that it should not be a mere translation of, for example, what is currently being discussed in Europe, because we all know how that goes with those types of regulation: sometimes rules that would not make sense for a particular context are adopted and cannot be enforced. There was a very big concern that this would not be the case with AI regulation. The text proposed by the expert committee was turned into a bill more recently, in May, and it has several provisions on algorithmic accountability. What I think is perhaps a bit different, if you compare it to the EU for example, is that the Brazilian proposal does not adopt a risk-based model. It is rights-based legislation with, of course, some elements of risk that modulate how those rights need to be materialized and what the governance of AI should look like, but it is not presented as risk-based legislation. It has very strong roots in the constitutional provisions that protect fundamental rights in general, in data protection, which is itself a fundamental right in Brazil, and in other provisions that protect autonomy and self-determination.
And that is the baseline for every AI system in relation to every person. There are, of course, concerns with the fact that Brazil is a historically very unequal society, with very deep issues of racism and discrimination, and this was taken into account in the way the legislation was proposed, in defining discrimination and indirect discrimination and how AI systems can either create or enhance discrimination. I think this is a very strong and very intentional feature of the Brazilian AI proposal currently on the table. Of course, I have to say that we are not expecting all of that to go through. As Deborah mentioned, the lobbying from big tech, through Brazilian offices but also coming directly from the US and sometimes the EU, was very strong with the data protection law; it is already very present in the discussions right now, and we expect some of the provisions that would be more constraining on big tech and on the private sector in general to be opposed. Still, this is very interesting, and I think it goes to show some of what Sofia has been saying about the region and the efforts to regulate AI. To go back to the topic of the panel: the trade agreement provisions that forbid the disclosure of source code rely on definitions that are extremely broad, sometimes encompassing not only algorithms but also APIs. The provisions that intend to create algorithmic accountability, by contrast, take into account the fact that you don't always need access to source code. There are other ways to provide accountability, and there are levels, all depending on who needs to know that information: the general public, experts, or the regulator.
So there are layers, and some of them do not require access to code. But the fact that those trade provisions are so broad that they might reach even access to the interface, for example, would certainly hinder the effectiveness of the kind of legislation being proposed in Brazil. And what we see is what has been mentioned before: there are currently no talks between the government officials working on the regulation of AI and the ones discussing trade agreements. Of course, I have to say that Brazil is currently not a party to any of the mega-regional trade agreements that already include those provisions, but as Sofia said, the discussions are heating up, and there is also the discussion on the e-commerce JSI, on which Brazil has been becoming more active and commenting more. There is also the fact that after 2016, Brazil's traditionally more defensive stance on those agreements shifted in favor of a position more aligned with the US on some of those issues. There is still some uncertainty about what is going to happen from now on, but Brazil has been taking a stance of disregarding its own internal regulations and relying on the assumption that the exceptions may be enough, and we know empirically that this is not the case.
So there is a lot of uncertainty. But the main focus of the panel is the fact that Brazil is an example of a country that is being very bold in regulating AI and providing multi-layered provisions for algorithmic accountability, while at the same time its own stance on trade agreements is not compatible with that. If Brazil were to sign one of these agreements, that would certainly hinder the probability of any of those provisions being effective. This is what we are concerned with and what we are looking at. Sofia provided a broader overview of the region and how those things have been heating up, and as civil society we have been trying to sound the alarm about that. We also wanted to talk about what has been happening in Brazil specifically, to give you a more concrete example of the main issues discussed in the panel. I'm going to stop here, because I've talked too much.

Moderator:
Thank you. No, not at all. Thank you, Mariana, for delving into Brazil. In the U.S., again, we are also trying to get some regulations in this space, and I think many of our countries are. So thank you to all our speakers. That was a lot of information. I know I have a few questions, but I would love to open it up and see if we have some questions from the crowd. Yes?

Audience:
I'm with the One Goal Initiative for Governance. I wanted to ask whether you could elaborate on the difference, in a practical sense, between rights-based and risk-based regulation. Are we going to do a round, or can I just…

Moderator:
I think for each one we can just jump in, whichever makes sense. If you’d like to start with Mariana.

Mariana Rielli:
Yeah, okay. Basically, I think the difference is that with risk-based regulation, the actual regulation only starts when a concrete risk is identified. There is a lot of discussion on this, because risk is likelihood times impact, and there are several instances of AI applications that are very probable, that will almost certainly happen, but whose impact is not considered extreme. Those types of applications are not taken into consideration, and the requirements on companies and on whoever is developing and selling those technologies are not as strong, or are sometimes non-existent. With rights-based regulation, you have a baseline: you don't only have requirements on companies to provide transparency when there is a certain amount of risk, you have an actual right to transparency in every single case. The level and type of requirements, and sometimes the timing, whether you have to provide transparency before, during, or after, can vary, but there is a baseline, and you can litigate. You have the right to ask those companies to provide information, and that is the baseline for every single AI system, regardless of the risk. That is, I think, basically the distinction between the two.
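The "likelihood times impact" arithmetic, and the difference in when a transparency duty is triggered under each model, can be sketched in a toy illustration. All function names, numbers, and the threshold below are hypothetical and do not come from any actual law or agreement:

```python
def risk_score(likelihood: float, impact: float) -> float:
    """Risk is commonly modeled as likelihood multiplied by impact."""
    return likelihood * impact


def risk_based_duty(likelihood: float, impact: float,
                    threshold: float = 0.5) -> bool:
    """Risk-based model: a transparency duty is triggered only when the
    computed risk crosses some regulatory threshold (hypothetical here)."""
    return risk_score(likelihood, impact) >= threshold


def rights_based_duty(likelihood: float, impact: float) -> bool:
    """Rights-based model: a baseline transparency duty applies to every
    system; risk only modulates how strong the duty is, never whether
    it exists at all."""
    return True


# A very likely but individually low-impact system, such as routine
# automated task assignment: likelihood 0.9, impact 0.2 -> risk 0.18,
# which falls below the 0.5 threshold.
print(risk_based_duty(0.9, 0.2))    # prints False: no duty is triggered
print(rights_based_duty(0.9, 0.2))  # prints True: baseline duty still applies
```

This is exactly the gap described above: a high-likelihood, low-impact application slips under the risk threshold and escapes any requirement, while under a rights-based baseline the transparency obligation holds in every case and risk only changes its intensity.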

Audience:
So the trade agreements, then, are more risk-based?

Mariana Rielli:
No. The EU AI Act proposal, for example, is very much a risk-based regulation. What I was saying is that in Brazil, the proposal currently on the table has those elements of modulating some of the provisions depending on risk, but it has several chapters related to rights. And the trade agreements are a whole different thing.

Speaker 1:
Can I give an example of that? Sure. Just to put it concretely: suppose you are a worker who is subject to AI in your workplace, which most of us are going to be very soon if we're not already, because employers are using artificial intelligence to advertise, to decide who to hire, to manage workers, to decide who gets a promotion or a raise, and to fire workers. If you're a worker who feels that your rights have been violated, under a risk-based approach you would first have to have some idea that the AI was faulty, even though you have no access to the AI. You would have to have some idea that it was because of a problem with the AI. You would then have to convince your shop steward, convince a lawyer to take your case, file a case, and hope that a judicial proceeding would actually start. That matters because the AI Act in Europe, for example, has exceptions for regulators and for judicial proceedings only, not for shop stewards, not for an individual who is fired, not for an individual whose rights have been violated. Then, if you can get a judicial proceeding started, the judiciary would have legal recourse to demand that the company disclose the AI to see if the workers' rights were violated. But there are four steps they would have to go through, and by the time they win all four, the AI is probably no longer being used. The difference with a rights-based approach would be to say that if a company is going to use AI to fire workers, it has to show that the AI is not violating workers' rights before it starts using it. Why shouldn't the labor ministry have the right to say: before you use AI that will have an impact on workers' rights, you have to demonstrate in advance that it is not violating those rights, and then you can use it?
We do that for pharmaceuticals; we do that for lots of products where we say you can't just put anything you want on the market while we wait and see if it kills people, or violates their rights, or means they don't get access to a loan, or means that, because a company has been buying data from your local pharmacy, you are denied insurance. Why doesn't the government have a right to say: we need to see the AI first, to make sure your rights are not being violated before it is used? Governments have access to lots of other data and ways to get information, but suddenly, if the company puts it in an AI, they say: oh, well, you can't see it. Because not only do they have intellectual property protection, they have patents, they have trade secrets, and now they want a third level of intellectual property protection in the source code. It's just a very, very lopsided way for them to shield those business decisions. And the risk-based approach applies to the regulation she was talking about; there is no rights-based or risk-based approach in trade agreements. It's just a ban, a ban on source code disclosure with a couple of tiny exceptions for regulators and enforcement.

Moderator:
Thank you very much for those fascinating presentations. I'd just like to ask all the panellists to briefly reflect, because it's such a new and important issue. I'd like to hear all of your reflections on the opportunities and challenges, particularly on the impact of AI on second-generation rights. Thank you. Who would like to start?

Sofia:
Yeah, well, it's a little bit connected with the risk-based versus rights-based approach they were talking about. What I'm seeing now, and I'm not criticizing the AI Act or anything, because every country and every region regulates according to its history and its way of doing things, is that AI is something we are only starting to understand. We don't actually understand it, and we can't yet measure the great impact it will have in the future, or assess those impacts, or imagine the future. It's really hard to imagine the future. I always say that, for example, in Star Wars, George Lucas was great at imagining the future, but he couldn't imagine a future without buttons: all the computers had these lights going on, and now we don't have that; our world has screens. He was a great visionary of the future. Anyway, the thing is that it's really hard for regulators to do that, because you cannot regulate the future; you always regulate the past. This is something we need to take into account whenever we tell regulators: we need regulation on AI, and we need it right now. It's really hard to regulate something that hasn't happened yet and that we can't imagine yet; you always regulate what has already happened, to try to prevent the damage. And second-generation rights are off the table; we are not even discussing them. We are just starting to discuss rights like privacy and intimacy and the risks they pose to societies, but we are not correctly assessing the way AI will affect not only individuals but the whole society. With collective rights, for example, you can imagine that this will affect workers' rights, of course.
So what type of access or explainability are we going to give to trade unions about how the workers they represent will be assessed, managed, or assigned tasks? It's not only one issue; there are many issues around how AI is affecting the workplace and the labor universe, and unions do not really understand that now. We need to give them the power to defend labor rights in this new world, and for that we need some collective rights over AI. By collective rights, I mean that unions should have the systems explained to them and should be able to assess whether those systems are right or not, to defend the interests of the workers. I'm not saying that unions should have access to source code, because they probably don't have the resources to really understand what's going on there, or maybe it's too complicated, especially in low-income countries. But they should at least have the right to an explanation of what is going on, what new technological tools are being introduced in the workplace, and how those tools are affecting workers. And they should have the right to say: we need this to be changed. Like my example of Rappi: we need this to be changed because it is affecting workers' lives and the labor risks those workers are facing. That should not be the work of one worker complaining; it should be the work of the union. That's what it's there for, because a worker exposes themselves when they go on strike; the union needs to be the institution that does that. If we want to move toward a more democratic, more humanitarian, more human-centered society, we need to start asking ourselves not only about privacy, my own rights, my intimacy, and individual rights, but also about community rights and how those rights are being affected by AI. And this debate is so immature right now.
So I mean, I think that's the thing: we need more experience with AI in order to regulate it, and that's what we're not getting, because we're running out of time, because everybody wants to sign something right now, you know. Anyway.

Mariana Rielli:
Yeah, I can say maybe a few words about that. I agree with Sofia, especially about regulation always looking at the past, but I do think that with all of the buzz around generative AI and the existential risk of AI, there is a narrative that purposefully ignores, or tries to conceal, the fact that there are very real and concrete examples from the past that have to do with second-generation rights. The most striking examples of AI impact are on housing and on welfare. In the Netherlands, for example, where I live right now, there was the scandal about the use of automated decision-making, not even necessarily AI, to identify potential fraud with welfare benefits, and how the use of that system actually ruined the lives of many people who were wrongly considered fraudulent and who lost their livelihoods. This is a very real example; it's not far-fetched, and it's not about the existential risk of AI and robots. It's just what has been happening. In Brazil, we have a very large welfare system as well, and it uses automated decision-making in several stages, though of course there is also human involvement. Those discussions are happening, and I think they need to inform regulation, and they have: the Brazilian proposal I just described is very explicit in protecting not only first-generation but also second-generation rights, and in providing mechanisms of collective redress, which is something really important in the Brazilian legal system as a whole. But there is also the issue we have been discussing of asymmetry of information and power, in the sense that most people have no idea that this is happening, and they cannot trace the effect to the AI system.
And I think that's where the discussion we are having comes into play: how do you trace those very real, concrete effects on people's lives back to the AI or automated decision-making systems? Not only is it necessary to have the appropriate policy-making space, but what we, as a civil society organization, argue is that this asymmetry of information must be reduced, so that there is also space for creativity and collective imagination around those technologies. Even if small companies and groups will of course never have the kind of resources needed to compete with large companies, there should at least be some level of access and ownership, so that it's not just a risk but also an opportunity. You asked about risks and opportunities, and I think the risks are very concrete, but the opportunities in the region and in Brazil are not as clear, because of this asymmetry of power and information.

Speaker 1:
I would just start by mentioning two concepts that we need to be thinking about more, and that I think a lot of regular people can understand too; you don't have to be a trade expert. The first is that we need to be using data in the public interest to solve the problems in our society. Right now, in the biggest data-hoarding countries like the United States, all of the investment in who uses data and for what purpose is driven by the private sector, which means they come up with ways to use data, innovation, and AI to make money, but not to solve the world's problems. If we had the idea that the data you produce should actually be used collectively to set an agenda based on your concerns, and that we should have input into what gets innovated based on the public interest, then a lot of people who know nothing about computers would be very clear that that's the way things should go in the future. The other issue is digital industrialization. The reason, as we mentioned, that the biggest corporations have the highest market value is that they own the data. If your country doesn't have access to the data it produces, but big tech does, then big tech will use it to create products and make money in your country based on data you gave them for free. We're giving away that data for free as individuals, but also collectively as workers, as communities, and as countries, and that is the new digital colonialism. They call it free flow of data, and they actually sold this whole project as e-commerce for development. It's the crazy thing.
They came up with this idea of e-commerce for development, they had UNCTAD work on it, and all this money went into creating the concept, as if the idea was: if a Bangladeshi home worker can make something, put her product on an internet platform, and get paid through PayPal, then all of her problems are solved. But that's not actually how the world works. You can't market internationally what you don't produce, and we need to be investing in production. To have more digital interfaces be part of our production, we need digital industrialization; for digital industrialization, you need data; and for data, you need data sovereignty: the data produced in your country needs to actually be good for your country. How do we know these things are so key? Because this is what big tech is fighting tooth and nail against, spending a lot of money going around the world, including to Europe, to convince everyone that free flow of data is actually in the interest of developing countries. I'll never forget the time I heard an African health minister say: it's not free flow of data, it's a one-way corporate data transfer, because you're not talking about flowing data from Europe or from the U.S. back to me. You put in your GDPR, so you have the privacy protections; you won't let the data come here, but you take all of our data. Why don't you have an actual free flow of data where, instead of just taking all of my country's health data and creating patented things that you then sell back to me, you actually allow that free flow, and then I can create some new products and maybe make some money in your country?
But of course, that's not what they're talking about, because it is cross-border data transfer controlled by corporations, and it's a one-way flow. It's not free flow of data, and it's certainly not e-commerce for development. So those are some of the biggest concerns about economic, social, and cultural rights in the future: if you don't have control over the most valuable raw material your country produces, which is data, or certainly will be in the future if it's not now, then you won't have the ability to create jobs or to get economic benefit from what you produce, because you will have given away your most valuable raw material. And that's just on the data. We've talked mostly about source code, but if you don't have the sovereignty to regulate the computer algorithms, and yet you have market access commitments where those corporations have the right to operate in your country, because all of these trade agreements of course come with the right of corporations to provide services and all kinds of other things in your country, then you don't have a right to regulate in the public interest either. And there's a big push for that now, where they're saying environmental services should be included. Sofia mentioned the tax issue: big tech right now is trying to push for the moratorium on electronic transmissions that we have now to also include services.
Well, that wipes out all of your GATS flexibilities. If you're a country that has kept the policy space to regulate your services, to regulate the foreign provision of services in your country, but you then concede that services are allowed to be part of the moratorium and that you're not allowed to have restrictions on cross-border trade, you can have all kinds of services coming into your country on a Mode 1, cross-border basis that you had previously decided it was in your best interest to keep for your local population. There are so many myriad issues that we can't even think of them all, and that's why so much pressure is being put on developing countries. Think about the fact that Europe has been doing agriculture for, what, 6,000 years? And they still say they need subsidies, and they still say they need tariffs to survive. Yet they're telling developing countries that don't even have a 100% electrification rate that they need to give up all their policy space for managing digitalization in the future. It's really quite appalling, and it doesn't need to be done. E-commerce is flourishing; it's going on; the internet exists without these rules. We can take a pause, and if we do, we'll actually see that we don't need the rules that big tech is trying to ram down our throats right now.

Speech statistics (speed / length / time)

Audience: 138 words per minute / 61 words / 27 secs
Mariana Rielli: 141 words per minute / 2454 words / 1045 secs
Moderator: 181 words per minute / 540 words / 179 secs
Sofia: 172 words per minute / 3274 words / 1144 secs
Speaker 1: 218 words per minute / 1942 words / 535 secs