Davos hears Fink warn AI could deepen inequality

BlackRock CEO Larry Fink used his Davos speech to put AI at the centre of a broader warning: in the AI era, trust may become the world’s ‘hardest currency’.

Speaking at the World Economic Forum, he argued that new technologies will only strengthen societies if people believe the benefits are real, fairly shared, and not decided solely by a small circle of insiders.

Fink said AI is already showing a familiar pattern: the earliest gains are flowing mainly to those who control the models, data, and infrastructure. He cautioned that, without deliberate choices, AI could deepen inequality in advanced economies, echoing his observation that decades of wealth creation after the fall of the Berlin Wall ultimately concentrated prosperity among a narrower share of people than a ‘healthy society’ can sustain.

He also raised a specific fear for the workforce, asking whether AI will do to white-collar jobs what globalisation did to blue-collar work: automate, outsource, and reshape employment faster than institutions can protect workers and communities. That risk, he said, is why leaders need to move beyond slogans and produce a credible plan for broad participation in the gains AI can deliver.

The stakes, Fink argued, go beyond economic statistics. Prosperity should not be judged only by GDP or soaring market values, he said, but by whether people can ‘see it, touch it, and build a future on it’, a test that becomes more urgent as AI changes how value is created and who captures it.

Fink tied the AI debate to the legitimacy crisis facing Davos itself, acknowledging that elite institutions are widely distrusted and that many people most affected by these decisions will never enter the conference. If the WEF wants to shape the next phase of the AI transition, he said, it must rebuild trust by listening outside the usual circles and engaging with communities where the modern economy is actually built.

He also urged a different style of conversation about AI: less staged agreement and more serious disagreement aimed at understanding. In that spirit, he called for the forum to take its discussions beyond Davos, to places such as Detroit, Dublin, Jakarta and Buenos Aires, arguing that only real dialogue, grounded in lived economic realities, can give AI governance and AI-driven growth the legitimacy to last.

Diplo is reporting live on all sessions from the World Economic Forum 2026 in Davos.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT and the rising pressure to commercialise AI in 2026

The moment many have anticipated with interest or concern has arrived. On 16 January, OpenAI announced the global rollout of its low-cost subscription tier, ChatGPT Go, in all countries where the model is supported. After debuting in India in August 2025 and expanding to Singapore the following month, the USD 8-per-month tier marks OpenAI’s most direct attempt yet to broaden paid access while maintaining assurances that advertising will not be embedded into ChatGPT’s responses.

The move has been widely interpreted as a turning point in the way AI models are monetised. To date, most major AI providers have relied on a combination of external investment, strategic partnerships, and subscription offerings to sustain rapid development. Expectations of transformative breakthroughs and exponential growth have underpinned investor confidence, reinforcing what has come to be described as the AI boom.

Against this backdrop, OpenAI’s long-standing reluctance to embrace advertising takes on renewed significance. As recently as October 2024, chief executive Sam Altman described ads as a ‘last resort’ for the company’s business model. Does that position still reflect Altman’s confidence in alternative revenue streams, or is OpenAI simply the first company to bite the ad-revenue bullet before other AI ventures have mustered the courage to do so?

ChatGPT, ads, and the integrity of AI responses

Regardless of one’s personal feelings about ad-based revenue, its economic weight is hard to dispute. According to Statista’s Market Insights research, the worldwide advertising market has surpassed USD 1 trillion in annual revenue. With such figures in mind, integrating ads wherever possible can look like a no-brainer.

Furthermore, relying solely on substantial but irregular cash injections is not a reliable way to keep the lights on at a company valued at USD 500 billion, especially in the wake of the RAM crisis. As much as the average consumer would prefer to use digital services without ads, devising an alternative, well-grounded revenue stream is tantamount to financial alchemy. Advertising remains one of the few monetisation models capable of sustaining large-scale platforms without significantly raising user costs.

For ChatGPT users, however, the concern centres less on the mere presence of ads and more on how advertising incentives could reshape data use, profiling practices, and the handling of conversational inputs. OpenAI has urged its users to ‘trust that ChatGPT’s responses are driven by what’s objectively useful, never by advertising’. Altman’s company has also guaranteed that user data and conversations will remain protected and will never be sold to advertisers.

Such bold statements are not made lightly: they put Altman’s credibility on the line and expose the company to repercussions should the promises be broken. Since OpenAI is privately held, shifts in investor confidence following the announcement are not visible through public market signals, unlike at publicly listed technology firms. User numbers therefore remain the most reliable gauge of how ChatGPT is perceived by its target audience.

Competitive pressure behind ads in ChatGPT

Introducing ads to ChatGPT would be more than a simple change to how OpenAI makes money. Advertising can influence how the model responds to users even if ads are not shown directly within the answers: commercial pressure can still shape how information is presented, for example through the system prompts that steer the model. Certain products or services could thus be described more positively than others, without clearly appearing as advertisements or endorsements.

Recommendations raise particular concern. Many users turn to ChatGPT for advice or comparisons before making important purchases. If advertising becomes part of the model’s business, it may become harder for users to tell whether a suggestion is neutral or influenced by commercial interests. Transparency is also an issue, as the influence is much harder to spot in a chat interface than on websites that clearly label ads with banners or sponsored tags.

While these concerns are valid, competition remains the main force shaping decisions across the AI industry. No major company wants its model to fall behind leading rivals such as ChatGPT, Gemini, or Claude. Nearly all of these firms have faced public criticism or controversy at some point, forcing them to adjust their strategies and work to rebuild user trust.

The risk of public backlash has so far made companies cautious about introducing advertising. Still, this hesitation is unlikely to last forever. By moving first, OpenAI absorbs most of the initial criticism, while competitors get to stand back, watch how users respond, and adjust their plans accordingly. If advertising proves successful, others are likely to follow, drawing on OpenAI’s experience without bearing the brunt of the growing pains. To quote Arliss Howard’s character in Moneyball: ‘The first guy through the wall always gets bloody’.

ChatGPT advertising and governance challenges

Following the launch of ChatGPT Go, lawmakers and regulators may need to reconsider how existing legal safeguards apply to ad-supported LLMs. Most advertising rules are designed for websites, apps, and social media feeds, rather than systems that generate natural-language responses and present them as neutral or authoritative guidance.

The key question is which rules should apply. Advertising in chatbots may not resemble traditional ads, muddying the waters as to whether it falls under digital advertising rules, AI governance frameworks, or both. The uncertainty matters because different regimes carry different disclosure, transparency, and accountability requirements.

Disclosure presents a further challenge for regulators. On traditional websites, sponsored content is usually labelled and visually separated from editorial material. In an LLM interface such as ChatGPT, however, any commercial influence may appear in the flow of an answer itself. This makes it harder for users to distinguish content shaped by commercial considerations from neutral responses.

In the European Union, this raises questions about how existing regulatory frameworks apply. Advertising in conversational AI may intersect with rules on transparency, manipulation, and user protection under current digital and AI legislation, including the AI Act, the Digital Services Act, and the Digital Markets Act. Clarifying how these frameworks operate in practice will be important as conversational AI systems continue to evolve.

ChatGPT ads and data governance

In the context of ChatGPT, conversational interactions can be more detailed than clicks or browsing history. Prompts may include personal, professional, or sensitive information, which requires careful handling when introducing advertising models. Even without personalised targeting, conversational data still requires clear boundaries. As AI systems scale, maintaining user trust will depend on transparent data practices and strong privacy safeguards.

Then there is data retention. Advertising incentives can increase pressure to store conversations for longer periods or to find new ways to extract value from them. For users, this raises concerns about how their data is handled, who has access to it, and how securely it is protected. Even if OpenAI initially avoids personalised advertising, the lingering allure of monetising stored conversations will remain a central issue in the discussion about advertising in ChatGPT, not a secondary one.

Clear policies around data use and retention will therefore play a central role in shaping how advertising is introduced. Limits on how long conversations are stored, how data is separated from advertising systems, and how access is controlled can help reduce user uncertainty. Transparency around these practices will be important in maintaining confidence as the platform evolves.

Simultaneously, regulatory expectations and public scrutiny are likely to influence how far advertising models develop. As ChatGPT becomes more widely used across personal, professional, and institutional settings, decisions around data handling will carry broader implications. How OpenAI balances commercial sustainability with privacy and trust may ultimately shape wider norms for advertising in conversational AI.

How ChatGPT ads could reshape the AI ecosystem

We have touched on the potential drawbacks of AI models adopting an ad-revenue model, but what about the benefits? If ChatGPT successfully integrates advertising, it could set an important precedent for the broader industry. As the provider of one of the most widely used general-purpose AI systems, OpenAI’s decisions are closely watched by competitors, policymakers, and investors.

One likely effect would be the gradual normalisation of ad-funded AI assistants. If advertising proves to be a stable revenue source without triggering significant backlash, other providers may view it as a practical path to sustainability. Over time, this could shift user expectations, making advertising a standard feature rather than an exception in conversational AI tools.

Advertising may also intensify competitive pressure on open, academic, or non-profit AI models. Such systems often operate with more limited funding and may struggle to match the resources of ad-supported platforms such as ChatGPT. As a result, the gap between large commercial providers and alternative models could widen, especially in areas such as infrastructure, model performance, and distribution.

Taken together, these dynamics could strengthen the role of major AI providers as gatekeepers. Beyond controlling access to technology, they may increasingly influence which products, services, or ideas gain visibility through AI-mediated interactions. Such a concentration of influence would not be unique to AI, but it raises familiar questions about competition, diversity, and power in digital information ecosystems.

ChatGPT advertising and evolving governance frameworks

Advertising in ChatGPT is not simply a business decision. It highlights a broader shift in the way knowledge, economic incentives, and large-scale AI systems interact. As conversational AI becomes more embedded in everyday life, these developments offer an opportunity to rethink how digital services can remain both accessible and sustainable.

For policymakers and governance bodies, the focus is less on whether advertising appears and more on how it is implemented. Clear rules around transparency, accountability, and user protection can help ensure that conversational AI evolves in ways that support trust, choice, and fair competition, while allowing innovation to continue.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Sovereign AI race continues with three finalists in South Korea

South Korea has narrowed its race to develop a sovereign AI model, eliminating Naver and NCSoft from the government-backed competition. LG AI Research, SK Telecom, and Upstage now advance toward final selection by 2027.

The Ministry of Science and ICT emphasised that an independent AI model must be trained from scratch, starting from initialised weights. Models that reuse pre-trained weights, even open-source ones, do not meet this standard.

A wild-card round allows previously eliminated teams to re-enter the competition. Despite this option, major companies have declined, citing unclear benefits and high resource demands.

Industry observers warn that reduced participation could slow momentum for South Korea’s AI ambitions. The outcome is expected to shape the country’s approach to homegrown AI and technological independence.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

European Parliament moves to force AI companies to pay news publishers

Lawmakers in the EU are moving closer to forcing technology companies to pay news publishers for the use of journalistic material in model training, according to a draft copyright report circulating in the European Parliament.

The text forms part of a broader effort to update copyright enforcement as automated content systems expand across media and information markets.

Compromise amendments also widen the scope beyond payment obligations, bringing AI-generated deepfakes and synthetic manipulation into sharper focus.

MEPs argue that existing legal tools fail to offer sufficient protection for publishers, journalists and citizens when automated systems reproduce or distort original reporting.

The report reflects growing concern that platform-driven content extraction undermines the sustainability of professional journalism. Lawmakers are increasingly framing compensation mechanisms as a corrective measure rather than as voluntary licensing or opaque commercial arrangements.

If adopted, the position of the Parliament would add further regulatory pressure on large technology firms already facing tighter scrutiny under the Digital Markets Act and related digital legislation, reinforcing Europe’s push to assert control over data use, content value and democratic safeguards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI firms fall short of EU transparency rules on training data

Several major AI companies appear slow to meet EU transparency obligations, raising concerns over compliance with the AI Act.

Under the regulation, developers of large foundation models must disclose information about training data sources, allowing creators to assess whether copyrighted material has been used.

Such disclosures are intended to offer a minimal baseline of transparency, covering the use of public datasets, licensed material and scraped websites.

While open-source providers such as Hugging Face have already published detailed templates, leading commercial developers have so far provided only broad descriptions of data usage instead of specific sources.

Formal enforcement of the rules will not begin until later in the year, extending a grace period for companies that released models after August 2025.

The European Commission has indicated willingness to impose fines if necessary, although it continues to assess whether newer models fall under immediate obligations.

The issue is likely to become politically sensitive, as stricter enforcement could affect US-based technology firms and intensify transatlantic tensions over digital regulation.

Transparency under the AI Act may therefore test both regulatory resolve and international relations as implementation moves closer.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

South Korea faces mounting pressure from US AI chip tariffs

New US tariffs on advanced AI chips are drawing scrutiny over their impact on global supply chains, with South Korea monitoring potential effects on its semiconductor industry.

The US administration has approved a 25 percent tariff on advanced chips that are imported into the US and then re-exported to third countries. The measure is widely seen as aimed at restricting the flow of AI accelerators to China.

The tariff thresholds are expected to cover processors such as Nvidia’s H200 and AMD’s MI325X, which rely on high-bandwidth memory supplied by Samsung Electronics and SK hynix.

Industry officials say most memory exports from South Korea to the US are used in domestic data centres, which are exempt under the proclamation, reducing direct exposure for suppliers.

South Korea’s trade ministry has launched consultations with industry leaders and US counterparts to assess risks and ensure Korean firms receive equal treatment to competitors in Taiwan, Japan and the EU.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

xAI faces stricter pollution rules for Memphis data centre

US regulators have closed a loophole that allowed Elon Musk’s AI company, xAI, to operate gas-burning turbines at its Memphis data centre without full air pollution permits. The move follows concerns over emissions and local health impacts.

The US Environmental Protection Agency clarified that mobile gas turbines cannot be classified as ‘non-road engines’ to avoid Clean Air Act requirements. Companies must now obtain permits if their combined emissions exceed regulatory thresholds.

Local authorities had previously allowed the turbines to operate without public consultation or environmental review. The updated federal rule may slow xAI’s expansion plans in the Memphis area.

The Colossus data centre, opened in 2024, supports training and inference for Grok AI models and other services linked to Musk’s X platform. Nvidia hardware is used extensively at the site.

Residents and environmental groups have raised concerns about air quality, particularly in nearby communities. Legal advocates say xAI’s future operations will be closely monitored for regulatory compliance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Sadiq Khan voices strong concerns over AI job impact

London Mayor Sir Sadiq Khan has warned that AI could become a ‘weapon of mass destruction of jobs’ if its impact is not managed correctly. He said urgent action is needed to prevent large-scale unemployment.

Speaking at Mansion House in the UK capital, Khan said London is particularly exposed due to the concentration of finance, professional services, and creative industries. He described the potential impact on jobs as ‘colossal’.

Khan said AI could improve public services and help tackle challenges such as cancer care and climate change. At the same time, he warned that reckless use could increase inequality and concentrate wealth and power.

Polling by City Hall suggests more than half of London workers expect AI to affect their jobs within a year. Khan said entry-level roles may disappear fastest, limiting opportunities for young people.

The mayor announced a new task force to assess how Londoners can be supported through the transition. His office will also commission free AI training for residents.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Matthew McConaughey moves decisively to protect AI likeness rights

Oscar-winning actor Matthew McConaughey has trademarked his image and voice to protect them from unauthorised use by AI platforms. His lawyers say the move is intended to safeguard consent and attribution in an evolving digital environment.

Several clips, including his well-known catchphrase from Dazed and Confused, have been registered with the United States Patent and Trademark Office. Legal experts say it is the first time an actor has used trademark law to address potential AI misuse of their likeness.

McConaughey’s legal team said there is no evidence of his image being manipulated by AI so far. The trademarks are intended to act as a preventative measure against unauthorised copying or commercial use.

The actor said he wants to ensure any future use of his voice or appearance is approved. Lawyers also said the approach could help capture value created through licensed AI applications.

Concerns over deepfakes and synthetic media are growing across the entertainment industry. Other celebrities have faced unauthorised AI-generated content, prompting calls for stronger legal protections.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Cloudflare acquires Human Native to build a fair AI content licensing model

San Francisco-based company Cloudflare has acquired Human Native, an AI data marketplace designed to connect content creators with AI developers seeking high-quality training and inference material.

The move reflects growing pressure to establish clearer economic rules for how online content is used by AI systems.

The acquisition is intended to help creators and publishers decide whether to block AI access entirely, optimise material for machine use, or license content for payment instead of allowing uncontrolled scraping.

Cloudflare says the tools developed through Human Native will support transparent pricing and fair compensation across the AI supply chain.

Human Native, founded in 2024 and backed by UK-based investors, focuses on structuring original content so it can be discovered, accessed and purchased by AI developers through standardised channels.

The team includes researchers and engineers with experience across AI research, design platforms and financial media.

Cloudflare argues that access to reliable and ethically sourced data will shape long-term competition in AI. By integrating Human Native into its wider platform, the company aims to support a more sustainable internet economy that balances innovation with creator rights.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!