WS #41 Big Techs and Journalism: Disputes and Regulatory Models
Session at a Glance
Summary
This discussion focused on the complex relationship between big tech companies, digital platforms, and journalism in the modern media landscape. Participants explored various challenges and potential solutions regarding the sustainability of journalism in the digital era.
Key issues discussed included the shift of advertising revenue from traditional media to digital platforms, the impact of AI on content creation and distribution, and the need for fair compensation for journalistic content used by platforms. The speakers debated different regulatory approaches, such as Australia’s news bargaining code and proposals for public sector funds financed by digital platforms.
There was significant discussion about the difficulties in defining “media” and “journalism” in the current information ecosystem, which complicates regulatory efforts. The impact of AI on journalism was a major concern, with participants noting both threats to copyright and potential benefits for content creation.
The speakers disagreed on the role of government in regulating the relationship between tech companies and media. Some argued for stronger regulation to protect journalism, while others cautioned against government involvement, fearing potential threats to free speech.
The discussion also touched on the need for media companies to adapt their business models and build direct relationships with audiences. Participants emphasized the importance of maintaining journalistic ethics and quality in the face of technological disruption.
Overall, the conversation highlighted the complexity of balancing innovation, fair compensation, and the preservation of quality journalism in the digital age. The speakers agreed that ongoing global dialogue and collaboration are necessary to address these challenges effectively.
Keypoints
Major discussion points:
– The impact of digital platforms on journalism revenue and business models
– Regulatory approaches to compensate news organizations for content used by platforms
– Challenges of defining journalism and who should benefit from compensation schemes
– The emerging threat of AI systems using journalistic content without compensation
– The need for media organizations to adapt and innovate their own business models
The overall purpose of the discussion was to explore different perspectives on how to promote journalism sustainability in the digital era, examining regulatory models and alternatives for fair compensation from digital platforms.
The tone of the discussion was thoughtful and analytical, with participants offering nuanced views on complex issues. There was general agreement on the challenges facing journalism, but some disagreement on solutions, particularly around government involvement. The tone became more urgent when discussing AI, reflecting the rapidly evolving nature of that threat.
Speakers
– Bia Barbosa (Moderator)
– Journalist and member of the Brazilian Internet Steering Committee
– Iva Nenadic
– Researcher at the European University Institute Center for Media Pluralism and Media Freedom
– Studies media pluralism in the context of content curation, ranking, and moderation policies of online platforms
– Juliana Harsianti
– Journalist and researcher from Indonesia
– Works on the influence of digital technology in developing countries
– Nikhil Pahwa
– Indian journalist, digital rights activist, and founder of Media Nama
– Key commentator on Indian digital media, censorship, and internet regulation
Full session report
The Digital Media Landscape: Challenges and Opportunities for Journalism
This discussion, moderated by Bia Barbosa, a journalist and member of the Brazilian Internet Steering Committee, explored the complex relationship between big tech companies, digital platforms, and journalism in the modern media landscape. Participants from diverse backgrounds, including Iva Nenadic, Juliana Harsianti, and Nikhil Pahwa, examined various challenges and potential solutions regarding the sustainability of journalism in the digital era.
Impact of Digital Platforms on Journalism
The speakers unanimously agreed that digital platforms have significantly disrupted traditional media business models. Nikhil Pahwa, providing perspective from India, noted that platforms both benefit media by driving traffic and compete for advertising revenue. Juliana Harsianti pointed out that small media outlets can use platforms to reach audiences but face sustainability challenges, particularly in the Global South. Iva Nenadic emphasised the tremendous power platforms wield in shaping information systems with little accountability.
Regulatory Approaches and Compensation Models
The discussion revealed divergent views on regulatory approaches to platform-media relationships. Nikhil Pahwa criticised Australia’s news bargaining code, arguing it set a problematic precedent of paying for links. He cautioned against government involvement in media-platform relationships, citing risks to media independence. Instead, Pahwa advocated for regulation focusing on algorithmic accountability and transparency rather than mandating payments.
Iva Nenadic highlighted the Danish model of collective negotiation as a potential alternative. This approach involves media organizations collectively bargaining with platforms, potentially addressing the power imbalance between large tech companies and media outlets, especially smaller ones. Nenadic suggested that this model could be more effective than individual deals or government-mandated payments.
The speakers acknowledged the difficulty in defining “media” and “journalism” in the current information ecosystem, which complicates regulatory efforts.
Emerging Threats and Opportunities from AI
The impact of AI on journalism emerged as a major concern. Bia Barbosa raised copyright issues regarding AI systems using journalistic content to train models without compensation. Nikhil Pahwa warned about AI summaries potentially cannibalising traffic from news sites, disrupting traditional web traffic dynamics. However, he also noted the potential future use of synthetic data by AI models, which could reduce the need for journalistic content in training.
Juliana Harsianti highlighted the use of AI in content creation by journalists in Indonesia, raising ethical concerns about journalistic integrity and the future of the profession. The speakers agreed that AI’s rapid evolution presents both opportunities and threats to journalism, necessitating new regulatory frameworks and ethical guidelines.
Future of Journalism and Media Sustainability
Nikhil Pahwa argued that media organisations need to innovate and develop new business models rather than relying on subsidies or government intervention. He suggested that media companies should protect their rights through legal means when necessary.
Iva Nenadic stressed the importance of journalism demonstrating its value proposition to audiences, particularly in light of declining trust, especially among younger demographics. She emphasized the need for self-reflection within the journalism profession to address these issues and reconnect with younger audiences.
Juliana Harsianti highlighted the unique sustainability challenges faced by small and alternative media outlets in developing countries, where they often rely on donor funding. This underscored the need for diverse solutions that consider regional contexts and the specific needs of smaller media initiatives.
Unresolved Issues and Future Considerations
The discussion left several crucial issues unresolved, including:
1. Effectively regulating AI’s use of journalistic content without stifling innovation
2. Determining fair compensation models for platforms’ use of media content
3. Balancing the need for regulation with concerns about government involvement in media
4. Addressing declining trust in traditional journalism, especially among younger audiences
The speakers suggested potential areas for further exploration, such as:
1. Developing collective bargaining strategies for media coalitions
2. Creating public sector funds financed by digital platforms to support journalism
3. Establishing self-regulatory frameworks within the journalism industry to address ethical concerns around AI use
Conclusion
The discussion highlighted the complexity of balancing innovation, fair compensation, and the preservation of quality journalism in the digital age. While the speakers agreed on the challenges facing journalism, they offered diverse perspectives on potential solutions, particularly regarding government involvement and regulatory approaches.
Bia Barbosa’s closing remarks emphasized the need for balance between big companies, national media companies, and the public interest, suggesting a potential role for the state to play. The conversation underscored the need for ongoing global dialogue and collaboration to address these challenges effectively, considering regional differences and the diverse needs of media outlets of all sizes. As the digital landscape continues to evolve rapidly, ensuring a sustainable future for quality journalism remains a critical global challenge requiring innovative and flexible approaches.
Session Transcript
Bia Barbosa: Okay, thank you. Is that okay? Perfect. Yeah. So, good afternoon, everyone who is here in Saudi Arabia. My name is Bia Barbosa. I'm a journalist and member of the Brazilian Internet Steering Committee. Actually, I'm going to moderate this workshop in the place of Rafael Evangelista, who was supposed to be here but had problems getting into the country because of visa issues. Thank you, everybody, for being here, and thank you to the people who are with us here in the room as well. So welcome to the Big Techs and Journalism: Disputes and Regulatory Models workshop. The idea today is to have an open debate on what the alternatives are to promote journalism sustainability in the digital era, and what we can learn from regulatory endeavors on the remuneration of journalism by digital platforms across different countries. As a brief introduction, I would like to share with you that the demand for fair remuneration from digital platforms in favor of journalists or news companies is not new. It's a tension that has deepened since the prominence of large information platforms and the rise of communication mediated by social media. The exponential growth of digital platforms transformed the digital advertising ecosystem. Their business models, based on data collection and analysis for the purpose of targeting advertising, have profoundly impacted contemporary journalism, and the systematic shift of revenue from journalism to digital platforms has reshaped the landscape of media consumption, production, and distribution. These transformations not only alter the circulation of journalistic content, but also exacerbate power imbalances, potentially widening the gap between those with access to quality, reliable, and diverse information, and those without. This is particularly evident in crises such as those surrounding public health and political electoral communications. At the core of these concerns lies the question of how journalism is compensated by digital platforms, igniting a wave of regulatory proposals across many nations and mobilizing multiple stakeholders. Australia, notably, passed pioneering legislation addressing this issue. In Canada, the approval of the Online News Act prompted META to remove news from their platforms. A presidential decree has been issued in Indonesia, while South Africa is currently conducting an inquiry into digital platform markets. In Brazil, where I come from, two proposals have been at the forefront of the debate since 2021: the determination in law of an obligation for digital platforms to negotiate with journalism companies, and the approval of a public sector fund financed by digital platforms. Although these proposals do not necessarily contradict each other, the idea of a fund is defended by many actors as an alternative to the direct bargaining model and not as its complement. At the international level, regulatory initiatives have been the subject of years of negotiations involving not only the executive and legislative branches, but also the judiciary. In addition to the state actors, a myriad of other actors are taking part in the debates: digital platforms, media companies, researchers, journalists, civil society organizations, and international bodies. Last year, the Content and Cultural Goods Chamber of the Brazilian Internet Steering Committee published a study entitled Remuneration of Journalism by Digital Platforms, in which we mapped out five controversies on the subject.
The first one is who should benefit? In other words, what should be the scope of any legislation regarding remuneration of journalism by platforms? The trend in legislative proposals has been to create minimal criteria for designating potential beneficiaries, such as the number of employees or media turnover. However, these criteria have been criticized because they potentially exclude individuals or small businesses. For some, journalists themselves should be paid directly, and for others, this is unfeasible. The second controversy is who should pay? The proposals we have mapped use different terminology to define the actor responsible for this remuneration: digital platforms in Australia, online content-sharing service providers in the European Union, platforms and digital news intermediary companies in Canada. In Brazil, the bill on platform regulation uses the terminology of social media providers, search engines, and instant messaging services. A third issue: pay for what? The understanding of what journalistic content is varies greatly. For example, a report published by the Organization for Economic Cooperation and Development in 2021 defines news as information or commentary on contemporary issues, explicitly excluding entertainment news. However, this is a narrow view, one that can also be read in some of the regulatory initiatives analyzed in our report. In addition, an important part of the content made available by media, which generates high levels of engagement on social media platforms, refers to sports and entertainment. This controversy is also related to the content of voluntary agreements between platforms and journalism companies negotiated without the intermediation of a public authority. The guarantee of confidentiality of these commercial agreements prevents the evaluation of the criteria used to remunerate journalism and its impact. Therefore, there is concern that the use of quantitative criteria, such as the number of publications, will serve as an incentive to reduce the quality of the content produced. The fourth controversy highlighted is related to the demand for more transparency in the work of the platforms, whether in relation to digital advertising revenue or the algorithms used in the content recommendation systems for users. So, remuneration based on what data? And finally, what should the role of the state be? To what extent should the state interfere in relations between journalistic content producers and digital platforms? The Australian code left a wide margin for these actors to negotiate on their own. However, there is no consensus on whether this is the best model, even considering some specific countries like Brazil, where free negotiation between the parties can result in an even greater concentration of resources and power in a small number of players. The idea of a public sector fund, financed by digital platforms and managed in a participatory way, is based on a more proactive and broader vision of the role of the state. And in this case, decisions about the beneficiaries of the initiative would be part of the construction of public policies to support journalists. So, much to discuss. Our workshop session will be divided into three parts. The first will consist of the speakers presenting their views and policy experience. In the second, the idea is to have a short debate among the different perspectives raised by you, the speakers, and the last part will be devoted to Q&A.
I would very much like to talk to our colleagues here in the room and in the online room as well. So, I'm going to present you, not all of you right now, but one at a time, as you are going to speak. I think that we could start with Iva Nenadic. Iva studies media pluralism in the context of content curation, ranking, and moderation policies of online platforms, the democratic implications such policies may have, and related regulatory interventions, at the European University Institute Center for Media Pluralism and Media Freedom. She has been involved in the design and implementation of the Media Pluralism Monitor. So, Iva, thank you very much for being with us. It would be great if you could present your thoughts on information pluralism online. Thank you very much. You have eight minutes.
Iva Nenadic: Thank you very much for having me. I will try to stick to eight minutes. I will also try to be maybe a bit briefer so that we have more time for exchange. And indeed, I kind of apologize in advance because my view may be a bit more Eurocentric, because this is the main area that we focus on, being a research center on media pluralism at the European University Institute and running the Media Pluralism Monitor in all EU member states and in candidate countries, so candidates for EU membership. But of course we do regularly exchange with our colleagues and partners in South America, in Australia, in the US, and all over the globe. Basically, the focus of our work is on the health of the information system. And the way we understand media pluralism as a concept is perhaps a little bit different from how this concept is understood in the US or in Australia or in other parts of the world, because when we speak about media pluralism, we don't speak just about competition in the market, so the market dimension of this, but about a wider enabling environment for journalism and for media, which are enablers of freedom of expression somehow. So we are looking at fundamental rights protection, such as access to information and freedom of expression, both in the regulatory framework and in practice; the role of relevant authorities; the status and safety of journalists, including digital safety, which is an important aspect of media pluralism; as well as social inclusiveness or representativeness of different social groups, not only in media content but also in management structures, and not only of media but increasingly also of digital companies or big tech, whichever terminology we want to use. And then there is this element of political independence or political power. So our work very much revolves around the concept of power. The way we approach and understand media pluralism, and the way we regulate somehow in Europe to protect media pluralism, is to somehow curb or limit opinion centralization or the concentration of opinion-forming power. And this is how we've been doing this for the media world that we had in the past. Of course, we are still not there when it comes to platforms, but I think it's quite obvious, and probably not just from this conversation, that opinion-forming power has increasingly shifted from the media, if it is even still with the media, to online platforms or digital platforms or digital intermediaries. So we live in an information environment in which digital platforms, so big technology companies, largely excluded from liability and accountability, actually do have power over shaping our information systems and over the distribution of media and journalistic content. The media, unlike digital platforms, do have liability over the content they produce and publish. So what we are seeing somehow is a profound paradigm shift where, as I said, technology companies are becoming or have become in many instances, especially for certain demographics, the key infrastructures where people actually engage with news and the information that can affect and shape their political opinion. And so they have tremendous power, but very little responsibility in respect to that.
But because the focus of our conversation today is on the economic side, the economic implications, I will focus a little bit more on that, on this relationship between big tech and journalism in economic terms. I think it's really important to emphasize that, also in economic terms, this rise in the centrality of platforms has led to the disintegration of news production, which is very costly, especially if you think of investigative journalism and quality reporting in general, from distribution, which is kind of cheap. It's easy and cheap nowadays to distribute content and then benefit or monetize from that. And it has also been disintegrated from advertising, because the platforms have positioned themselves as intermediaries between the media or journalism and their audiences, and also between the media and advertisers. And we know that traditionally the business model of media was developed as a two-sided market: providing news to audiences, or even charging them through forms of subscription or paying for newspapers and similar, and then selling the attention of audiences to advertisers. And now both sides of this market have been disrupted and are controlled somehow by the digital platforms or big tech. And in the multi-sided market of big tech companies, the media are just one component of this value chain. So I think this is also something important to keep in mind. You opened with a relatively strong focus on online platforms, digital platforms, but I think what's also important to introduce into this conversation is the increasingly relevant role of generative AI companies, who are extensively using media content to train their models and to generate outputs, very often separating the content from the source, so diluting the visibility of media brands, which again has implications for the economic sustainability of the media. And in that environment we also have negotiations, or at least attempts at negotiations, at establishing a sort of level playing field, which is very difficult to establish, right? Because of tremendous imbalances in power between the tech side and the media side. But I think this dimension is also very, very relevant, very important to look at. And two last points I want to make. One is about thinking about the power of big tech in relation to media. So they decide whether they want to carry media content or journalism at all. And we've seen this especially with these attempts at regulation; for example, you mentioned the news bargaining code in Australia. You mentioned the initiatives in Brazil, in India, in South Africa, in Canada, in the U.S., especially in California, which is a very interesting case of trying to establish frameworks for negotiation or fair remuneration that should go from big tech to the media. And this is not easy because, again, there is a tremendous imbalance in power. And what we've seen with the Australian example, which is the most advanced one, is that now there is a backsliding somehow, because Australia recently published a review of the effectiveness of this framework for negotiations which suggests that it is not strong enough to ensure the sustainability of this approach, because we've seen that the major platforms are withdrawing. They don't want to renegotiate new deals. They don't want to expand these deals to include more local media, for example.
So it again suggests that the power is still with platforms. The power is still with big tech. And so very often, as a response to regulatory intervention, what they do is they either threaten to ban news or they just ban it. What we've been seeing from them throughout the years is that they are segregating the news in specific tabs, for example, in specific areas on the services they provide, so that eventually they can just switch it off or shut it off. The kind of conversation we have in Europe, and one maybe important point to make, is that unlike Australia, which went with competition law, in Europe we focused a little bit more on copyright as a basis, as a ground for negotiations between the platforms and the media for fair remuneration. And I think this is also interesting for the conversation around generative AI and how to deal with this problem in that area. And we've seen a lot of issues with this, right? Because first of all, these negotiations, as you already emphasised, are somehow opaque, so we don't really know what has been negotiated. Who negotiates? In some cases, in some countries, we have individual big publishers negotiating first or negotiating separately, which has implications for media pluralism, because what the big ones negotiate sets somehow the benchmark for the other ones, and if the big ones are negotiating and excluding the smaller ones, this really can have tremendous consequences for media pluralism or information pluralism more broadly. The big markets, of course, are much better positioned, or big languages are much better positioned, to negotiate with big tech than the smaller ones. And the same applies to this tension between the publishers or the media companies and journalists, because, as we've seen from many examples, they're not always aligned and not always on the same side, so who should benefit is indeed a big question. We define media and journalism in a very broad sense, trying to take into account that there is a plurality of relevant voices, voices of reference, in the contemporary information sphere that should be considered somehow at an equal level with journalists, but of course this complicates the situation even further. And I don't know if I have any minutes left or should I? Yeah, I have. Okay, good. So, basically, the main point I was trying to make is about what we are seeing, what we've learned somehow from these initiatives, mostly focusing on Australia and the copyright directive in Europe, because these two, I think, have the longest experience; they've been around for a couple of years, so we can reflect a little bit and look at the effectiveness of these initiatives. I think there are a lot of shortcomings somehow that are surfacing now, that do show that we do not have sufficient instruments to deal with this enormous and even growing power of big tech, and that the negotiation power is still on the side of platforms. So we haven't really managed to put the media at the same level to be able to negotiate equally. The problems are also on the media side. As I said, there is this fragmentation between the media companies, between media and journalists, between the big and small ones, between big markets and smaller markets, big languages and smaller languages. We do have good examples here. For example, in Denmark, they decided to form a coalition and to negotiate collectively with big tech.
And they are really persistent on this, and they're very clear about their conditions and setting their benchmarks high. Another problem that we should consider in this conversation is the lack of a clear methodology for what the value is, who should be calculating the value, and what fair remuneration is in this context. We have several examples or several cases where this value is calculated in a different way. So it's not clear, and of course it's not clear from these deals, because these deals, as we said already, are not transparent. And so what we are seeing increasingly in the policy framework is a shift from these bargaining or negotiation frameworks to something that is a bit more direct regulatory or policy intervention in this area. So there is increasing talk about the need, for example, to tackle the fact that platforms do have this power to decide whether they even want to carry media content or not. In Europe, for example, we have the European Media Freedom Act, which introduces a precedent somehow by putting forward the principle that media content is not just any other content to be found on an online platform. So platforms have to pay due regard to this content, and in case they want to moderate this content, they need to follow a special procedure. And I think this speaks a lot to this direction of policy conversations that are suggesting that if these platforms have indeed become key infrastructure for our relationship with news and with media and with informative content more broadly, then maybe we should consider them as a public utility. And maybe there should be some must-carry rules in order to make sure that media and journalism content remains there, so that they don't have the power to just remove it. Or we should think of complete alternatives so as to break down these dependencies. In terms of bargaining or negotiating frameworks for fair remuneration, there has been a shift somehow, or an intention to shift this conversation, looking at the failure of these negotiation frameworks, or at least their shortcomings, towards something that is a more direct intervention in terms of a digital tax or digital levy. But then this opens a new area of questions about how you then allocate and distribute this money, especially taking into account that not all states have all the necessary checks and balances to make sure that these kinds of processes are not abused. So I think I said a lot, so I'll just stop here and look forward to the exchange.
Bia Barbosa: Thank you. Thank you so much, Iva. And we're going to have time for this exchange, for sure. You mentioned the impact on small journalistic initiatives, and I think that is a good way to connect with Juliana Harsianti. I don't know if I pronounced your surname correctly. I would very much like to ask you to present your views on the impact of digital platforms on community development and the importance of journalism for these communities. To introduce you: Juliana is a journalist and researcher from Indonesia. She has worked mostly on the influence of digital technology in developing countries, contributing, for example, to Global Voices and international online media. And I'm sure that her perspective has much in common with our perspective in Brazil as well. So I give the floor to Juliana. Thank you very much for being with us. I don't know what time it is there in Indonesia, but thank you for being with us.
Juliana Harsianti: Yeah, thank you. Can you hear me? I'm sorry I cannot turn on the video, because this is better for the sound connection. Thank you to the Brazilian Internet Steering Committee, CGI.br, for inviting me as a speaker on this important issue. Good afternoon to everyone who is attending in Riyadh. It is almost 9 PM here in Jakarta, but it is OK to have some discussion with colleagues about the impact of big tech on journalism. As mentioned earlier in the opening remarks, early this year Indonesia published a presidential decree regulating big tech and digital platforms to share revenue with publishers, because the government thinks that the presence of digital platforms in Indonesia has been a disruption to the business model of mainstream media in Indonesia. It is still under discussion between the tech companies, the journalism associations, and the government in Indonesia whether this decree can be implemented shortly or whether there will be some modification or adjustment in the future. But this evening I will talk about how small media mostly take advantage of digital platforms and social media to promote freedom of the press and to spread information, or a greater variety of information, in Indonesia. I can give two examples from Indonesia: Magdalene and Project Multatuli, both online media platforms based in Indonesia. Magdalene focuses on gender issues, while Project Multatuli is more focused on in-depth journalism and highlights issues that have been avoided by the mainstream media in Indonesia. Why do they choose to take advantage of online platforms and digital social media? Because they think they can reach more audience and get more engagement from readers, but not from the business side, because they try to avoid having Google ads on their platforms. They try organically to establish their websites in the Google search engine, to keep their sites at number one in the search results. But like Iva said, small and medium media companies and community media have the advantage of not needing the big revenue of the big media companies. So they can act more freely to promote freedom of expression and multilingual websites, and can discuss more freely the issues that have been avoided by the mainstream media. And how do they manage to keep the business running? Yes, they have some business model to run. Most of them get money from donors and then from subscribers, not so much subscriptions but mostly donations from individuals who support their platform and want to keep reading quality journalism and alternative media in Indonesia. I think this is enough from my side, and back to you.
Bia Barbosa: Thank you very much, Juliana. And for sure there are other challenges that we will be able to exchange on regarding the sustainability of small media initiatives. I think that from the Global South perspective we still have other challenges than the Global North has, because at least from the South American and Latin American perspective we face the problem of media concentration. In very few countries do we have public media that can more or less guarantee some pluralism in the media landscape in general. So I think that, besides the challenges developed countries already have regarding the sustainability of journalism, we are still facing last century's challenges regarding media pluralism, and then we face the new ones regarding all the stress that the new forms of production and distribution of content bring to us. So thank you very much for sharing your experience there in Indonesia, and I think we can move forward with Nikhil Pahwa. I would love to hear Nikhil on your studies on the revenue demands from big tech companies, linking them to the legal cases against AI. I think that is a good connection with what Iva brought us at the beginning, relating how the AI systems are using journalistic content to train their models, specifically the generative AI systems, but not only. So thank you very much for being here with us. I'm going to introduce you, and please feel free to complete any information. Nikhil is an Indian journalist, digital rights activist, and founder of Media Nama, a mobile and digital news portal. He has been a key commentator on stories and debates around Indian digital media companies, censorship, and internet and mobile regulation in India, and of course he has been studying this demand from big tech companies regarding journalistic revenue. So thank you very much for being with us. You have 10 minutes.
Nikhil Pahwa: Thank you, and thank you for inviting me to this very important discussion. I'm a journalist and I've been a media entrepreneur from India for about 16 years now, and I've been a journalist for 18. I've also been blogging for about 21 years. I'm a part of a few key media-related committees in India that look at the impact of regulations on media, including the Media Regulation Committee of the Editors Guild of India, and I come at this from an internet perspective, having built my entire career on an online platform. We are a small media company; we have about 15 people working at our media organisation. But I also still do believe that journalism is not the exclusive privilege of traditional media or formal journalists. Even today news breaks on social media, and frankly, I see journalism as an act, and therefore people who publish verified content even on social media are also doing journalism. So we can't really look at things purely through a mainstream media lens, and you know, even today there are online news channels and online podcasts that run as media businesses, online media businesses, and they're just an alternative to traditional media. The primary challenge that media companies, and especially traditional media companies, face is the shift of advertising revenue from traditional media organisations, which had restricted distribution, to digital platforms, where they now face infinite competition because everyone can report, everyone can create content. And you know, big tech companies like Google and Facebook have built business models that rely heavily on data collection and targeted advertising, which has meant that they are competing as aggregators with the media companies on their platforms. But also let us not forget that media companies also compete with all users on the same platforms. So the real challenge for media is one of discovery. But we also have to realise that for media businesses, and I run one, the benefit that these platforms create is that they send us traffic as well. For most media publications, a majority of their traffic comes from search and social media, and they're the primary source of traffic for many news companies today, including us. What's also happening, you know, just to cover the complete situation, is that we are facing a new threat with AI summaries, which is what Google does on its search. Unlike traditional search, which used to direct traffic to us, AI summaries potentially cannibalise traffic; they don't send us traffic anymore. And so Google isn't now just an aggregator of links, but is also turning into an answers engine, and that is a term which is also used by Perplexity, which performs the same function. These and similar RAG models for AI basically take facts from news companies and compile them into fresh articles that serve a user's need. So in fact, a future threat for us, one that we will see play out in the next 2 to 3 years, is that apps like Perplexity, which use our content, or use the facts that we report, will start cannibalising our traffic. And all media monetise the traffic that they have, and they rely on building a relationship with users so that they read them on a regular basis. But really it is important to remember that if we do not get traffic, we will not be sustainable.
And so while most of this conversation has been focused on getting paid for linking out, I think that is a battle that should not be fought, because we actually benefit from search engines and from social media platforms linking out to us. And if you start forcing them to pay and they choose not to link out, which is what Facebook did in Canada, it will actually cost news companies significant revenue because audiences will not discover them. Australia's news bargaining code as well, I feel, has set the wrong precedent, because we benefit from traffic from social media. Linking out should not be mandatorily paid for; it breaks a fundamental, foundational principle of the internet, where the internet is an interconnection of links and people go from link to link to link and discover new content, new innovation, new things to read. And so I think we should be very careful about forcing platforms to pay for linking out, because that is a mutually beneficial relationship. The advertising issue is frankly a function of the media not building a direct relationship with its audience, like we have built with our audience, and therefore losing out on monetisation to big tech platforms. Let us not forget that publications like the Guardian chose to sign up with Facebook for its Instant Articles; effectively, while they thought they were benefiting from the traffic on Facebook, they were also giving up their audience to Facebook. So I think we need to be careful and we need to build our own direct relationships. But I want to talk a lot more about AI, because I think that is where it becomes problematic. The tricky thing with AI is that facts are not under copyright, and media companies, news reporters like us, essentially report facts. There is copyright in how we write things but not in what we write about, because facts cannot be exclusive to one news company; the public good, effectively, is in the distribution and easy availability of facts. So platforms like Perplexity actually take facts from us and piece them together into a news article; they take it from multiple news organisations and they rewrite our content in a manner which, to be honest, can be much easier to read, and users can also query the same news article on sites like Perplexity, which means that a user gets all of their answers, based on our reporting, on other platforms. Now this is not copyright violation, but it is plagiarism, and unfortunately plagiarism is not illegal even though copyright violation is. Now most of the cases that are being run, some in the US and some in India, in the US brought by the New York Times and in India by a news agency called ANI, focus on the fact that our content is being taken by AI companies and ingested by them to train their models, and therefore the likelihood of them replicating our work is very high, and that they have taken this content without a license. And I think this is an important one, because there is no licensing, there is no compensation for using our work to train them, and I am aware that many news organizations across the world have actually signed up with AI companies for revenue-sharing arrangements. Now this is a very short-term perspective, and usually AI companies will do exactly what, for example, Google has done with its Google News Initiative and its News Showcase, where they tie up with big media companies, and this will end up actually ensuring that smaller companies do not get any money.
In the case of AI, that is also what is going to happen. I will give you a small example: when we moved our website to a new server, our website crashed because of the number of AI bots that were hitting our servers and taking our content, because since we had moved to a new server they thought this was a new website. So this stealing of our work is, I think, something that we need legislation and codes to address, and there needs to be regulation around copyright and AI. The outcome of the legal battles happening in the US with the New York Times, as well as in India with ANI, is going to set very important legal frameworks for regulators as well. And no one wants to touch the copyright issue, because there is an uneasy tension amongst countries; there is a geopolitical battle going on right now about who comes out on top in the AI race, and they realise that for large language models they need more and more written content and written facts, and a large repository of that lies with news organisations. So while today we are trying to fight battles related to linking out, which I think is a battle that should not be fought because linking out, like I said, is a fundamental, foundational principle of the internet, the battle that we need to fight, and we need to fight it early, is the battle to ensure that we get compensated for content being used by AI companies, or that they essentially remove our content from their databases. That is the battle that I see happening only in courts, but not in the case of legislators. And these legal frameworks are going to be very, very important to develop, because we need to create incentives for reporters to report and for news organisations to publish. Because let us face the facts: the content that AI companies generate is based on our work, and so if we do not do more original work, if we do not get incentivised to create original work and media companies start dying, effectively they will have nothing to build on top of. So I think this is the relationship, in terms of revenue relationships, that regulation needs to address, and like I said multiple times, I strongly feel that the idea of paying for links is flawed, and what has happened in Canada and what has happened in Australia is the wrong approach. Media companies are companies as well; they need to figure out mechanisms for monetization, and they have moved from an environment of limited competition in traditional media to infinite competition in digital media. They need to adapt to that change, not try and get a pittance from big tech firms. They should be competing with big tech firms. Thank you.
Bia Barbosa: Thank you very much, Nikhil. I think that you brought us a very challenging perspective, because we haven't managed so far to solve the challenge related to journalistic content used by platforms, by the news aggregators, and you are already facing AI training systems using journalistic content. I would like to take the opportunity to ask you something. Here in Brazil, there is a bill on artificial intelligence regulation that has just passed in the Senate. We still need the Chamber of Deputies to move forward and approve the bill, but it provides for copyright payment for journalistic content used in training and in the responses of AI systems as well. Do you think that, even considering it is a copyright approach, this could be interesting for solving at least this kind of problem that you mentioned? I would like to hear from you a little bit, since we are looking at all the perspectives that are on the table in different parts of the world to tackle this issue.

Nikhil Pahwa: No, I think that if it's legislated that there needs to be compensation for usage of copyrighted content, that is the correct approach. It's just that once you agree that there should be compensation, the question becomes who gets compensated and how much do they get compensated? And, you know, what is the frequency at which they get compensated? Do you, for example, get paid for an entire data dump being given, or do you get compensated on the basis of how it is used? In this case, how do you validate that your content is actually being used by AI? You know, because even Europe is struggling with algorithmic accountability. And by the way, on the linking out part, I have said that there shouldn't be a revenue share mechanism there. I do believe that we need algorithmic accountability for both social media as well as search, to ensure that, you know, there is no discrimination happening in terms of surfacing our content. And as a small media owner, I don't want something like that to benefit big media or traditional media at my expense. So the fairness principles also need to be taken into consideration, in the same way that fairness needs to be taken into consideration in the case of the law in Brazil. But the question you have to ask is, who is media today? How do you identify that, with this organisation, you are actually supporting journalism? Because like I said at the beginning, journalism is not the exclusive privilege of just journalists today, right? I am a blogger who started a media company. So I understand that bloggers also make money from advertising, and to that extent they don't get compensated. So why should I, as a blogger, be different from a media company? I am also running my own venture, right? So we are seeing an infinite ability for reporting today because anyone can report. And in that scenario, who gets compensated and who does not becomes even trickier. If you are scraping a media publication, shouldn't a blogger also get compensated if their blog is being scraped for AI? That is a question. Why or why not? So these are not easy answers. I do not even know if there are answers to some of these questions. But when you are looking at defining laws, you have to create that differentiation. You have to break it up into who benefits and who does not benefit from that regulation. If you look at most podcasters, they are doing opinion journalism in a sense.
They are carrying opinions, they are conducting interviews. Would you treat them as journalists under this law as well? Their transcripts, if they are being aggregated by AI, should they be compensated for that as well? Where do you draw the line? And that is the problem with laws: you do not know, it is very tricky to draw the lines in these cases.

Bia Barbosa: Yeah, and besides the law, in countries where you do not have a democratic regulator to analyze how these kinds of laws are being implemented, it becomes even more challenging to deal with. I do not know if Iva or Juliana want to comment on that or any other aspect. Iva, I would like to ask you to comment as well, besides anything else that you wanted to bring us, and to tell us a little bit about this coalition that you mentioned in Denmark, which the media established to negotiate collectively with the digital platforms. One of the issues that we had here in Brazil as well, in the platforms regulation bill, which, if approved (it is now in the Chamber of Deputies), would require compensation based not on copyright but on the use of journalistic content, was how to negotiate for this and how it would be possible for small initiatives to do so. There is already a digital journalism association in Brazil that tries to represent most of these small initiatives, but it doesn't manage to represent all of them. So how this coalition that you mentioned is working in Denmark is something I feel would be interesting to go a little bit deeper into, but if you want to dive into the AI topic as well, please feel free.
Iva Nenadic: Thank you. Yeah, I'll start with the last point. I think Nikhil said many super interesting and relevant things. I want to stay for a second with this last point on the complexity of defining media and journalism today. This is indeed one of the key obstacles for all the, not only regulatory attempts, but also soft policy measures that we want to implement in this area, because it's the first step, it's the foundation. Who do we consider a journalist? Who should benefit from these frameworks and who shouldn't? How far can we stretch this? We've been doing a lot of work within the EU, but also the Council of Europe, which covers many more countries in Europe, and the Council of Europe has put forward some recommendations on how to define media and journalism in this new information world or information sphere we live in. And it takes a very broad approach, right, because it's freedom of expression that is at stake, so it's one of the key principles somehow that we nurture in Europe, the fact that the profession should be open and inclusive. And so if this is the principle, how do we solve these practical obstacles? Because we do see a lot of paradoxes in the information systems nowadays, right? The more open the debate somehow is, the more demagoguery, the more misinformation we have. So, in a way, we have a plurality of voices in the news and information ecosystem, but not all of these voices are actually serving our democratic needs, right? Because many of these voices are actually misleading or extremely biased or not professional, not respecting ethical and professional principles, and so also creating a lot of disorder in the information system that confuses people, distorts trust, and has a lot of negative implications for our democratic systems. I can give one example that I'm not saying is a good solution, but maybe is a good starting point to look at on how to solve this problem. And this is something that has been heavily discussed within the negotiations around the European Media Freedom Act, which does provide this special treatment to media service providers, including journalists, in content moderation by major online platforms, so very large online platforms. We define them as those that have more than 10% of the EU population as regular users, so around 45 million people using them on a regular basis, monthly users. First of all, the law provides a definition of media service providers which is very broad, but in listing the criteria on who are the media or journalists that can or should benefit from this special treatment, there is, for the first time in EU law, a mention of self-regulation. And we have an explicit reference to the respect of professional standards. So the law, and now I don't recall exactly the text, says that those media who comply with national laws and regulation, but also comply with widely recognized self-regulatory or co-regulatory frameworks, are entitled to benefit from this. And of course, this can also be misused or abused. You can form an association of journalists that promotes the wrong standards and claim that it is a widely acknowledged framework if, I don't know, it has a certain number of media under its umbrella. But I think there is something in that. I think we need to find a way to somehow revive self-regulation and respect for professional standards and ethical principles for the different voices in the information sphere.
And we can start from traditional journalistic principles, but these, of course, can also evolve for the new needs. Another thing I think is useful for this kind of conversation from that example is the transparency of the media who benefit from this. We were battling heavily somehow to have this clause explicitly mentioned in the legal text. It's the requirement that the media who benefit, who self-declare as media, are transparent, that this list is easily accessible for everyone to read, so for civil society and academia to make sure that bad actors are not misusing or abusing this legal provision. So I think there is something to look into there. On generative AI, I think this is a very relevant conversation. And again, I would agree with Nikhil that this is a new battlefield somehow. We haven't resolved the old one. We haven't resolved the old risks to media pluralism, or the political influences and so on, and even safety issues for journalists, and we've moved to the area of digital platforms. So these two battles were fought in parallel. And now we also have generative AI that is profoundly disrupting the information sphere. And I think the biggest change that is happening with generative AI is that we are moving from the fragmentation of the public sphere that we had with digital platforms to what we call an audience of one. So this is extreme personalization of the interaction between an individual and the content that this individual is exposed to, generated by these models, these statistical models and systems that we don't really know how they operate, because of course there is a lack of transparency, there is a lack of accountability. We are not really sure what kind of data they are trained on. There are a lot of issues with the data they're trained on in terms of biases, lack of representativeness and so on. We are seeing, for example, cases such as Iceland. Iceland as a state strategically decided that it's important for them, and for the AI future we are entering, for their language and their culture to be represented. So they willingly gave all they have in the digital data world for free to OpenAI, just to be represented in those models, because they saw this as a priority. And then on the other hand, unlike the New York Times case, where the New York Times is suing OpenAI for breach of copyright because they used their content without a license or without an agreement, what we're seeing in Europe is that the publishers, especially the major ones, such as Le Monde, Axel Springer, El Pais in Spain and similar, are making deals with these companies. Deals that are opaque, so we don't know what these deals are, but, for example, the CEO of Le Monde said that it's a game-changing deal for them as one publisher, one media company. But this is probably not the best way forward, because it's fragmenting and weakening the position of publishers, and weakening even further the position of smaller publishers and journalists and so on. So I think in this context the Danish model is a very interesting one, because they started from, I think it's a trade union, but I would need to double-check whether it's a professional association or a trade union, but it was an existing organization of journalists in the country who decided that the best approach is to go for collective negotiations with Big Tech, because this will make them stronger.
And they also decided to use all the legal instruments and regulatory frameworks that are in place in Europe to make their position stronger, so to ally with the political power in the country to back them in this fight against the Big Tech giants. And, of course, this battle is ongoing; there is back and forth, so sometimes they manage to make progress and then there is a backlash from Big Tech. It is at a very early stage, very fresh, but I think it is a very interesting and relevant case to observe, to see how things can or should be done. Because I do believe that one of the lessons learned from the existing negotiation frameworks is that fragmentation doesn't really serve journalists and media. A collective approach is probably a better one, and we are seeing much more happening on that end: news media organizations coming together and finally starting to understand that they are stronger if they do this together. Yeah.
Bia Barbosa: Thank you, Iva. And just for the record, I would like to mention that we, as the Brazilian Internet Steering Committee, tried to invite Google and Meta representatives for this conversation here, but we didn't manage to convince them to come, which is what usually happens on these occasions. I see that Nikhil and Juliana have raised their hands. I'd just like to check whether there's anyone online asking questions. So, Nikhil, do you mind if I give the floor to Juliana first?
Nikhil Pahwa: Of course. Please go ahead. I’ve said quite a bit of it.
Juliana Harsianti: Okay. I think our discussion has moved from digital platforms to AI, which has become the major concern for journalism in Indonesia. In Indonesia, generative AI, and especially large language models, not only threatens copyright, as Nikhil mentioned, and information, but also threatens the work of journalism itself, because journalists have started to generate news using ChatGPT, for example, or another large language model; they then do some editing and publish it on their news sites. This practice is still under debate among media companies and journalism associations: is it good or ethical to publish generated news, or should the large language model only be used to find sources for the news, with the journalist then writing the article themselves before publishing it on their media? The problem with regulation is that, yes, I think we need regulation by the state or the government, but producing regulation takes time for discussion, while the technology is running fast. By the time the government has published a regulation on generative AI, we may already have a ChatGPT for the news area with far greater abilities than what we know at the moment. What we think should be done is that the associations, not only in journalism but also in the creative industries, join forces to discuss and create ground rules on what can and cannot be done with generative AI in their work. This is based more on ethics than on regulation, and for the moment they think this is enough, but I think we need stronger regulation, with law enforcement, to overcome the impact of generative AI on journalism and creative work. So back to you.
Bia Barbosa: Thank you very much, Juliana. Nikhil, please.
Nikhil Pahwa: Thanks. I'll just respond to one thing that Juliana said. While we want strong regulation of AI, I think it's going to be very difficult to get, because of what's happening geopolitically: the EU is seen as too strong a regulatory player, and countries are afraid that they will lose out on innovation and on the AI battle. So, at least in India, from what I can see, there is a lot of pressure not to regulate AI, and I understand this is the argument of the opposition in the Brazilian parliament as well: if you regulate strongly, you lose out. The other thing to look at, just responding to Iva, is that one way of ensuring that media owners get enough compensation is to not seek compensation only for media owners. If anybody's copyrighted content, whether it belongs to musicians, authors, or media owners like us, has been used to train models, they should be compensated. I had a conversation with a lawyer a few months ago who said that AI ingesting our content is like any person reading it, because when it gives an output, it's not the exact same thing; it's its understanding of our content. I would actually say that the power law applies here: the ability of AI to ingest vast amounts of our content from across the globe is far greater, and so there needs to be protection for creators, and that creator could be of any kind, media, movies, books, anything. I would also say that there are other mechanisms where AI does need to be regulated; for example, there has to be regulation for data protection. Iva mentioned bias, and I think bias is the trickiest one to regulate because it's about how one sees the world, and perhaps a plurality of AI systems needs to come in, in order to ensure that representation is of different kinds, just as bias exists in society. On the New York Times case, I will be surprised if there is a verdict, because we should not forget that the New York Times filed the case against OpenAI after negotiations for compensation failed. And I would be surprised if OpenAI does not find a way of compensating the New York Times and settling out of court, because they would not want a verdict, given that the Times' content has been ingested by OpenAI. There is one additional challenge that comes in, which is that this could be framed as systematic usage for research purposes. AI companies are trying to position the ingestion of our content as a mechanism for research, and in some countries there can be exceptions to copyright for research purposes. So this is another challenge, I think, that they are faced with. A fourth thing that is emerging now, and I talk to a lot of AI founders, is the usage of synthetic data, which is data generated by AI itself. It is coming into the mix to the point where, in the future, our content may no longer be needed for large language models, because they are already trained on existing content. In that case, compensation for future uses may no longer exist. Because, let's face it, these are language models; they are not necessarily fact models. Anyone who relies on AI for facts is probably going to get something or the other wrong, and that is going to become problematic. So I still feel that media does have an opportunity in its factual accuracy going into the future, where AI will always fail because its outputs are probabilistic in nature.
I know I'm not answering many things, because this is still uncharted territory; it is still evolving as we speak. But we need to take all of these factors into account. Thank you.
Bia Barbosa: Of course. And there's another topic that we didn't mention here today: I think that, for the journalistic community, it's actually interesting to have journalistic content training AI systems; otherwise, the results these AI systems bring us will be information we cannot trust in the end. So it's important, I think, that journalistic content is used by AI systems, but it has to be used in a fair way, with compensation and with copyright issues addressed, because for those of us who support the integrity of information online, it's important to have at least some journalistic content considered in the training of these systems. I see that Iva has raised her hand, and we are approaching the end of our session. I'm going to give the floor once more to each one of you, and I ask you to bring your final comments on this topic. Thank you again for being with us. We can start with you, Iva. Thank you.
Iva Nenadic: Thank you very much. I think it's probably just the beginning of the conversation, but it's excellent to have this conversation at such a global scale, and I think this kind of exchange is crucial to move us forward, so we should do more exchanges like this. I won't conclude on anything, because it's very difficult to give final remarks on any of this; these are all open questions. But I would like to put one more consideration forward, and it is something we haven't really seen addressed. What we see from a lot of surveys is that trust in journalism is declining, and, for example, the latest Reuters Institute Digital News Report suggests that people see journalists as drivers of polarization. Why this is the case has not been reflected on enough within the profession itself. Of course, there are multiple reasons for this, and there are also very strong smear and negative campaigns by politicians against journalists, politicians who of course want to undermine the credibility of the profession because that works better for them. But I think what we are not seeing sufficiently is this sort of self-reflection: where have we failed as a profession, especially when it comes to reconnecting with youth, with young audiences? Because clearly there is a gap there. Young people are departing from media in the traditional sense, they are departing from journalism in the traditional sense, and journalists are somehow ignoring this fact. We don't see enough self-reflection on that side. And then there is also the question of creating value for audiences. I don't think media and journalism in the traditional sense are investing enough in this. There is this demand that journalism and media should be treated as a public good, and I do strongly support the idea that media and journalism, when professional and ethical, are definitely a public good and should also be supported by public subsidies, in a way that is transparent and fair and contributes to media pluralism. But at the same time, there has to be a bit more self-reflection and more initiative coming from within the profession. At the moment, what we're seeing is a lot of complaints: we are captured by platforms, we are being destroyed by platforms, we need help. But the question of what value journalism actually has to offer people has been pushed aside or forgotten a little bit. So I think the best case for journalism would be to revive, or remind us of, what this value actually is, and how it can create value with these new tools and technologies that are at everyone's disposal, including media and journalism. I think that would make a stronger case for why people should go back to journalism and media and support them more.
Bia Barbosa: Thank you very much, Iva. Juliana, please, especially since it's already getting to 10 o'clock in Jakarta.
Juliana Harsianti: Oh, yeah, thank you. I agree with what Iva said: we cannot reach a conclusion for our discussion, because this kind of discussion still needs to be continued in the future, and it needs to be a regular conversation, whether in developed or developing countries, in the Global South or the Global North. It is important for journalists to find a new form on these digital platforms: how to deal with big tech, how to deal with generative AI, and how to keep journalistic ethics in the middle of the influence of digital platforms and generative AI, which have been challenging their work and the business models of media companies. Hopefully the conversation will have an impact on policy, whether on nation-state policy or within the associations, journalism associations and media companies, at the regional or national level, so that there will be a better environment for journalism to keep creating and to keep surviving in this digital era.
Bia Barbosa: Thank you. Thank you very much, Juliana. Nikhil, please, your final remarks.
Nikhil Pahwa: Thank you, and thank you for having me here. It's been a great conversation. I'm both a journalist and an entrepreneur, and I am a capitalist in how I work, but I do that ethically. I do feel that, as media, we have to find our own business models rather than relying on subsidies, government support, or anything from the government, to be honest, because anytime the government comes into a tripartite relationship between government, media, and big tech, two things happen, and I feel this strongly. First, governments use the funds, and it may be different in Europe, but in the Global South governments use funds as a mechanism of influencing the media. And secondly, if the media pushes for them to regulate big tech, then government creates regulations over big tech and uses that as a mechanism to regulate free speech. So, to be honest, in this relationship I do not want the government in there, because it has an impact on democracy, it has an impact on media freedom, whether directly or indirectly, whenever you have governments involved. I would rather that we figure out our own business models, and if there has to be regulation, it has to be applicable across society, not specific to the media. I do not feel we need special treatment, and I do not feel that we should have special treatment. We have to adapt as times change; we had to adapt when we moved from traditional business models to online business models, and now from online to AI. But at the same time, if someone is stealing our content, we need to go to court to protect our rights. So I strongly believe that I do not want government in the picture, and we do not need protection; we need to fight our own battles and we need to innovate on our own. For far too long we have allowed all the innovation to centre around big tech, when we have had the same opportunity to build audience relationships, and I do not think that expecting regulation, laws, and policies to support us is going to solve the problem for us. I know this is antithetical to what this conversation has been about, but that is the way I run my media business. Thank you.
Bia Barbosa: And of course, one thing is the government and another thing is the role of the state, which we brought up at the beginning of our conversation; that is one of the controversies we mapped in the report we published here at the Brazilian Internet Steering Committee. I totally agree about the risk we run when governments regulate freedom of expression issues, or regulate technology that is related to freedom of expression. But I also think we have to search for some kind of balance, because in countries like mine, Brazil, where you have big national media companies and the global big techs, the public, the citizens, get caught in the middle, and the state has a role to play in bringing at least more balance to the conversation. Of course, it is not only governments that can bring this balance: we have the judiciary, we have independent regulatory bodies. So there are other alternatives that I think we have to put on the table to try to find solutions that respect the specificities of each of the countries where we are discussing this kind of problem, but also from a global perspective, because we are dealing with global companies, and achievements in some countries may help us deal with this in other realities. From the Global South perspective, I think we can learn a lot from other countries that are tackling this problem. So once again, thank you very much for your time, your insightful thoughts, and for spending some time with us here at the IGF. This conversation, as you mentioned, is only the beginning. From the Brazilian Internet Steering Committee's perspective, I would like to thank you very much and to make ourselves available for any further exchange we might have. And to everybody listening online and to those who are here with us, a good evening. Thank you very much. Bye. Transcribed by https://otter.ai
Bia Barbosa
Speech speed: 137 words per minute
Speech length: 3344 words
Speech time: 1460 seconds
Platforms have disrupted traditional media business models
Explanation
Digital platforms have transformed the digital advertising ecosystem, impacting contemporary journalism. This has led to a shift in revenue from journalism to digital platforms, reshaping media consumption, production, and distribution.
Evidence
The exponential growth of digital platforms and their business models based on data collection and targeted advertising
Major Discussion Point
Impact of digital platforms on journalism
Agreed with
Nikhil Pahwa
Iva Nenadic
Agreed on
Digital platforms have disrupted traditional media business models
AI systems using journalistic content to train models raises copyright concerns
Explanation
The use of journalistic content to train AI models without compensation raises copyright issues. This practice is being challenged through legal cases in various countries.
Evidence
Legal cases against AI companies by news organizations like the New York Times in the US and ANI in India
Major Discussion Point
Challenges posed by AI to journalism
Maintaining journalistic ethics and quality is crucial amid technological disruption
Explanation
Barbosa emphasizes the importance of maintaining journalistic ethics and quality in the face of technological disruptions. This is crucial for ensuring the integrity of information online.
Major Discussion Point
Future of journalism and media sustainability
Nikhil Pahwa
Speech speed: 140 words per minute
Speech length: 2726 words
Speech time: 1168 seconds
Platforms benefit media by driving traffic, but also compete for advertising
Explanation
Digital platforms like Google and Facebook send traffic to media websites, which is beneficial. However, they also compete with media companies for advertising revenue on their platforms.
Evidence
For most media publications, a majority of their traffic comes from search and social media
Major Discussion Point
Impact of digital platforms on journalism
Agreed with
Bia Barbosa
Iva Nenadic
Agreed on
Digital platforms have disrupted traditional media business models
Australia’s news bargaining code set problematic precedent of paying for links
Explanation
Pahwa argues that forcing platforms to pay for linking out to news content is flawed. He believes linking is a fundamental principle of the internet and mutually beneficial for both platforms and media.
Evidence
The example of Facebook’s response to Canada’s Online News Act, where they removed news from their platform
Major Discussion Point
Regulatory approaches to platform-media relationships
Differed with
Iva Nenadic
Differed on
Approach to platform remuneration
Regulation should focus on algorithmic accountability and transparency, not mandating payments
Explanation
Instead of forcing platforms to pay for linking, Pahwa suggests focusing on algorithmic accountability. This would ensure fairness in how content is surfaced on platforms without discriminating against smaller media outlets.
Major Discussion Point
Regulatory approaches to platform-media relationships
Government involvement in media-platform relationships risks compromising media independence
Explanation
Pahwa expresses concern about government involvement in regulating relationships between media and platforms. He argues this could lead to governments using funds to influence media or using regulations to control free speech.
Evidence
Examples from the Global South where governments use funds to influence media
Major Discussion Point
Regulatory approaches to platform-media relationships
Differed with
Iva Nenadic
Differed on
Role of government regulation
AI summaries threaten to cannibalize traffic from news sites
Explanation
AI-generated summaries, such as those provided by Google’s search results, potentially reduce traffic to news websites. This is because users can get information without clicking through to the original source.
Evidence
Examples of AI tools like Perplexity that compile facts from news sources into fresh articles
Major Discussion Point
Challenges posed by AI to journalism
Agreed with
Juliana Harsianti
Iva Nenadic
Agreed on
AI poses new challenges to journalism
Media need to innovate and develop new business models rather than rely on subsidies
Explanation
Pahwa argues that media companies should focus on developing innovative business models instead of relying on government subsidies or protection. He believes this approach is necessary for maintaining independence and adapting to changing times.
Evidence
His personal experience as a media entrepreneur running a business ethically without relying on government support
Major Discussion Point
Future of journalism and media sustainability
Juliana Harsianti
Speech speed: 99 words per minute
Speech length: 1063 words
Speech time: 640 seconds
Small media can use platforms to reach audiences, but face sustainability challenges
Explanation
Small media outlets in Indonesia use digital platforms to promote freedom of press and reach wider audiences. However, they struggle with sustainability as they avoid relying on advertising revenue from platforms.
Evidence
Examples of Magdalene and Project Multatuli, two online media platforms in Indonesia focusing on gender issues and in-depth journalism respectively
Major Discussion Point
Impact of digital platforms on journalism
Journalists using AI to generate content raises ethical issues
Explanation
In Indonesia, some journalists are using AI tools like ChatGPT to generate news content, which they then edit and publish. This practice raises ethical concerns within the journalism community.
Evidence
Ongoing debate in Indonesia about the ethics of using AI-generated content in news production
Major Discussion Point
Challenges posed by AI to journalism
Agreed with
Nikhil Pahwa
Iva Nenadic
Agreed on
AI poses new challenges to journalism
Small and alternative media face unique sustainability challenges
Explanation
Small and alternative media outlets in developing countries face distinct challenges in maintaining sustainability. They often rely on donor funding and individual donations rather than traditional advertising models.
Evidence
Examples of business models used by small media outlets in Indonesia, such as relying on donations and avoiding Google ads
Major Discussion Point
Future of journalism and media sustainability
Iva Nenadic
Speech speed: 158 words per minute
Speech length: 4068 words
Speech time: 1541 seconds
Platforms have tremendous power over shaping information systems with little accountability
Explanation
Digital platforms have become key infrastructures where people engage with news and information that shape political opinions. However, they have little responsibility or accountability for this power.
Evidence
The shift of opinion-forming power from traditional media to online platforms
Major Discussion Point
Impact of digital platforms on journalism
Agreed with
Bia Barbosa
Nikhil Pahwa
Agreed on
Digital platforms have disrupted traditional media business models
Collective bargaining by media coalitions may be more effective than individual deals
Explanation
Nenadic suggests that media organizations coming together for collective negotiations with big tech companies might be more effective. This approach could strengthen the position of publishers, especially smaller ones.
Evidence
Example of a coalition in Denmark where media organizations are collectively negotiating with digital platforms
Major Discussion Point
Regulatory approaches to platform-media relationships
Differed with
Nikhil Pahwa
Differed on
Approach to platform remuneration
AI’s impact on journalism requires new regulatory frameworks
Explanation
The rise of generative AI is profoundly disrupting the information sphere, moving from fragmentation to extreme personalization. This shift requires new regulatory approaches to address issues of transparency, accountability, and bias in AI systems.
Evidence
Examples of AI companies making opaque deals with major publishers, potentially weakening the position of smaller publishers and journalists
Major Discussion Point
Challenges posed by AI to journalism
Agreed with
Nikhil Pahwa
Juliana Harsianti
Agreed on
AI poses new challenges to journalism
Journalism must demonstrate its value proposition to audiences
Explanation
Nenadic argues that journalism needs to reflect on its role and demonstrate its value to audiences, especially younger ones. This self-reflection is crucial for reconnecting with audiences and justifying support for journalism as a public good.
Evidence
Declining trust in journalism and perception of journalists as drivers of polarization, as reported in the Reuters Institute Digital News Report
Major Discussion Point
Future of journalism and media sustainability
Agreements
Agreement Points
Digital platforms have disrupted traditional media business models
Bia Barbosa
Nikhil Pahwa
Iva Nenadic
Platforms have disrupted traditional media business models
Platforms benefit media by driving traffic, but also compete for advertising
Platforms have tremendous power over shaping information systems with little accountability
All speakers agree that digital platforms have significantly impacted traditional media business models, reshaping the landscape of media consumption, production, and distribution.
AI poses new challenges to journalism
Nikhil Pahwa
Juliana Harsianti
Iva Nenadic
AI summaries threaten to cannibalize traffic from news sites
Journalists using AI to generate content raises ethical issues
AI’s impact on journalism requires new regulatory frameworks
The speakers agree that AI technologies, including generative AI and AI-powered summaries, present new challenges to journalism, ranging from ethical concerns to potential traffic loss and the need for new regulatory approaches.
Similar Viewpoints
Both speakers suggest alternative approaches to regulating platform-media relationships, focusing on transparency and collective action rather than mandated payments.
Nikhil Pahwa
Iva Nenadic
Regulation should focus on algorithmic accountability and transparency, not mandating payments
Collective bargaining by media coalitions may be more effective than individual deals
Both speakers emphasize the need for journalism to adapt and demonstrate its value in the changing media landscape, particularly for smaller and alternative media outlets.
Juliana Harsianti
Iva Nenadic
Small and alternative media face unique sustainability challenges
Journalism must demonstrate its value proposition to audiences
Unexpected Consensus
Importance of maintaining journalistic ethics and quality
Bia Barbosa
Iva Nenadic
Juliana Harsianti
Maintaining journalistic ethics and quality is crucial amid technological disruption
Journalism must demonstrate its value proposition to audiences
Journalists using AI to generate content raises ethical issues
Despite differing views on regulation and business models, there was an unexpected consensus on the importance of maintaining journalistic ethics and quality in the face of technological disruptions. This agreement spans across different regional perspectives and approaches to media sustainability.
Overall Assessment
Summary
The main areas of agreement include the disruptive impact of digital platforms on traditional media business models, the challenges posed by AI to journalism, and the importance of maintaining journalistic ethics and quality. There was also some consensus on the need for alternative approaches to regulating platform-media relationships and the importance of journalism demonstrating its value to audiences.
Consensus level
The level of consensus among the speakers was moderate. While there was agreement on the broad challenges facing journalism in the digital age, there were divergent views on specific regulatory approaches and the role of government in addressing these challenges. This implies that while there is a shared understanding of the problems, finding universally accepted solutions remains complex and context-dependent.
Differences
Different Viewpoints
Role of government regulation
Nikhil Pahwa
Iva Nenadic
Government involvement in media-platform relationships risks compromising media independence
Collective bargaining by media coalitions may be more effective than individual deals
Pahwa argues against government involvement in regulating media-platform relationships, citing risks to media independence. Nenadic, however, suggests that collective bargaining supported by regulatory frameworks could be beneficial.
Approach to platform remuneration
Nikhil Pahwa
Iva Nenadic
Australia’s news bargaining code set problematic precedent of paying for links
Collective bargaining by media coalitions may be more effective than individual deals
Pahwa criticizes Australia’s news bargaining code as setting a problematic precedent for paying for links, while Nenadic suggests collective bargaining as a potentially effective approach for fair remuneration.
Unexpected Differences
Sustainability strategies for media
Nikhil Pahwa
Juliana Harsianti
Media need to innovate and develop new business models rather than rely on subsidies
Small and alternative media face unique sustainability challenges
While both discuss media sustainability, Pahwa unexpectedly argues against relying on subsidies, emphasizing innovation, while Harsianti highlights the unique challenges faced by small media outlets in developing countries that often rely on donor funding.
Overall Assessment
Summary
The main areas of disagreement revolve around the role of government regulation, approaches to platform remuneration, and strategies for media sustainability.
Difference level
The level of disagreement is moderate, with speakers generally acknowledging similar challenges but proposing different solutions. This reflects the complexity of balancing media independence, economic sustainability, and regulatory approaches in the rapidly evolving digital media landscape.
Partial Agreements
Both Pahwa and Nenadic agree on the need for regulation addressing algorithmic accountability and transparency. However, they differ in their approach, with Pahwa focusing on platforms and Nenadic emphasizing the need for new frameworks to address AI’s impact.
Nikhil Pahwa
Iva Nenadic
Regulation should focus on algorithmic accountability and transparency, not mandating payments
AI’s impact on journalism requires new regulatory frameworks
Takeaways
Key Takeaways
Digital platforms have significantly disrupted traditional media business models and journalism
There are differing views on regulatory approaches to platform-media relationships, with some favoring government intervention and others opposing it
AI systems pose new challenges for journalism, including copyright concerns and potential cannibalization of traffic
The future sustainability of journalism requires innovation in business models and demonstrating value to audiences
Small and alternative media face unique challenges in the digital landscape
Resolutions and Action Items
None identified
Unresolved Issues
How to effectively regulate AI’s use of journalistic content without stifling innovation
Determining fair compensation models for platforms’ use of media content
Balancing the need for regulation with concerns about government involvement in media
How to define ‘journalism’ and ‘media’ in the digital age for regulatory purposes
Addressing declining trust in traditional journalism, especially among younger audiences
Suggested Compromises
Collective bargaining by media coalitions with platforms instead of individual deals
Focusing regulation on algorithmic accountability and transparency rather than mandating payments
Creating public sector funds financed by digital platforms to support journalism, managed in a participatory way
Developing self-regulatory frameworks within the journalism industry to address ethical concerns around AI use
Thought Provoking Comments
The tricky thing with AI is that facts are not under copyright, and media companies, news reporters like us, essentially report facts. There is copyright in how we write things, but not in what we write about, because facts cannot belong exclusively to one news company; the public good is effectively in the distribution and easy availability of facts.
Speaker
Nikhil Pahwa
Reason
This comment highlights a key challenge in regulating AI’s use of journalistic content – the distinction between copyrightable expression and non-copyrightable facts. It introduces complexity to the discussion of how to protect journalistic work in the age of AI.
Impact
This led to further discussion about the legal and ethical implications of AI systems using journalistic content, and the challenges of regulating this use.
We are seeing, for example, cases such as Iceland. Iceland as a state strategically decided that it is important for them, and for the AI future we are entering, for their language and their culture to be represented. So they willingly gave everything they have in the digital data world for free to OpenAI, just to be represented in those models, because they saw this as a priority.
Speaker
Iva Nenadic
Reason
This example introduces a new perspective on the relationship between AI companies and content providers, showing how some entities might willingly provide content to ensure representation.
Impact
This comment broadened the discussion beyond just compensation issues to include considerations of cultural representation and diversity in AI training data.
I strongly believe that I do not want government in the picture and we do not need special treatment, we have to adapt as times change, we have to adapt from when we move from traditional business models to online business models, from online to AI but at the same time if someone is stealing our content we need to go to court to protect our rights in a sense.
Speaker
Nikhil Pahwa
Reason
This comment challenges the prevailing narrative of seeking government intervention and regulation, instead advocating for media companies to adapt and innovate independently.
Impact
This perspective shifted the conversation to consider the potential drawbacks of government involvement and the importance of media companies’ own adaptability and innovation.
What's also happening, you know, just to cover the complete situation, is that we are facing a new threat with AI summaries, what Google does on its search. Unlike traditional search, which used to direct traffic to us, AI summaries potentially cannibalise traffic; they don't send us traffic anymore.
Speaker
Nikhil Pahwa
Reason
This comment introduces a new dimension to the discussion by highlighting how AI summaries are changing the dynamics of web traffic and potentially threatening media companies’ business models.
Impact
This led to further discussion about the evolving challenges faced by media companies in the digital age, beyond just content use and compensation issues.
Overall Assessment
These key comments shaped the discussion by introducing nuanced perspectives on the challenges faced by media companies in the age of AI and digital platforms. They moved the conversation beyond simple issues of compensation to consider broader implications for copyright, cultural representation, business model adaptation, and the role of government regulation. The discussion evolved to encompass a more complex understanding of the interplay between journalism, technology, and regulation in the digital age.
Follow-up Questions
How to define media and journalism in the current digital landscape?
Speaker
Iva Nenadic
Explanation
This is a foundational issue for developing regulatory frameworks and policies to support journalism in the digital age.
How to ensure fair compensation for content used to train AI models?
Speaker
Nikhil Pahwa
Explanation
This is crucial for protecting the rights and sustainability of content creators, including journalists, as AI systems increasingly use their work.
How to address the ethical implications of journalists using generative AI to produce news content?
Speaker
Juliana Harsianti
Explanation
This raises important questions about journalistic integrity and the future of the profession in the age of AI.
How to revive trust in journalism, especially among younger audiences?
Speaker
Iva Nenadic
Explanation
Addressing declining trust is crucial for the future relevance and sustainability of journalism.
How can media companies innovate and develop sustainable business models in the digital age?
Speaker
Nikhil Pahwa
Explanation
This is essential for ensuring the long-term viability of journalism without relying on government intervention or subsidies.
How to balance the need for regulation of big tech with protecting free speech and media independence?
Speaker
Nikhil Pahwa and Bia Barbosa
Explanation
This is a complex issue that requires careful consideration to protect both journalistic freedom and the public interest.
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Related event
Internet Governance Forum 2024
15 Dec 2024 06:30h - 19 Dec 2024 13:30h
Riyadh, Saudi Arabia and online