Binance has applied for a pan-European MiCA licence in Greece, positioning the country as a key regulatory gateway into the EU. The MiCA framework harmonises oversight across member states, enabling licensed firms to operate EU-wide under a single approval.
Contrary to expectations that Malta or Latvia would host the filing, the exchange selected Athens, where it has already established a holding company. The Hellenic Capital Market Commission is reportedly fast-tracking the review with support from leading accounting firms.
Company representatives said the MiCA regime offers legal clarity, regulatory certainty, and a framework that supports responsible innovation. Approval could lead to Binance expanding its corporate presence in Greece, including the opening of new offices and local staffing.
Regulatory urgency is intensifying as the July deadline approaches, particularly for firms operating across multiple EU jurisdictions. A successful application would strengthen Binance’s European strategy, expanding market access and reinforcing regulatory compliance.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
European policymakers are calling for urgent action to accelerate AI deployment across the EU, particularly among SMEs and scale-ups, as the bloc seeks to strengthen its position in the global AI race.
Backing the European Commission’s Apply AI Strategy, the European Economic and Social Committee said Europe must prioritise trust, reliability, and human-centric design as its core competitive advantages.
The Committee warned that slow implementation, fragmented national approaches, and limited private investment are hampering progress. While the strategy promotes an ‘AI first’ mindset, policymakers stressed the need to balance innovation with strong safeguards for rights and freedoms.
Calls were also made for simpler access to funding, lighter administrative requirements, and stronger regional AI ecosystems. Investment in skills, inclusive governance, and strategic procurement were identified as key pillars for scaling trustworthy AI and strengthening Europe’s digital sovereignty.
Support for frontier AI development was highlighted as essential for reducing reliance on foreign models. Officials argued that building advanced, sovereign AI systems aligned with European values could enable competitive growth across sectors such as healthcare, finance, and industry.
Snapchat’s parent company has settled a social media addiction lawsuit in California just days before the first major trial examining platform harms was set to begin.
The agreement removes Snapchat from one of the three bellwether cases consolidating thousands of claims, while Meta, TikTok and YouTube remain defendants.
These lawsuits mark a legal shift away from debates over user content and towards scrutiny of platform design choices, including recommendation systems and engagement mechanics.
A US judge has already ruled that such features may be responsible for harm, opening the door to liability that Section 230 protections may not cover.
Legal observers compare the proceedings to historic litigation against tobacco and opioid companies, warning of substantial damages and regulatory consequences.
A ruling against the remaining platforms could force changes in how social media products are designed, particularly in relation to minors and mental health risks.
Hong Kong’s proposed crypto licensing overhaul has drawn criticism from an industry association, whose leaders warn it could disrupt compliant firms and deter blockchain exposure.
Under the proposals, the existing allowance enabling firms to allocate up to 10% of fund assets to crypto without additional licensing would be removed. Even minimal exposure would require a full licence, a move the association called disproportionate and harmful to market experimentation.
Concerns also focused on the absence of transitional arrangements. Without a grace period, firms may be forced to suspend operations while licence applications are reviewed.
The association proposed a six- to 12-month transitional window to allow continued activity during regulatory processing.
Further criticism focused on custody rules restricting client assets to SFC-licensed custodians. Industry representatives warned the measure could limit access to early-stage tokens, restrict Web3 investment, and impose unnecessary geographic constraints.
The European Commission has unveiled the Digital Networks Act, aiming to reduce fragmentation across the EU telecoms sector. Proposals include limited spectrum harmonisation and an EU-wide numbering scheme to support cross-border business services.
Despite years of debate, the plan stops short of creating a fully unified telecoms market. National governments continue to resist deeper integration, particularly around control of 4G, 5G and Wi-Fi spectrum.
The proposal reflects a cautious approach from the European Commission, balancing political pressure for reform against opposition from member states. Longstanding calls for consolidation have struggled to gain consensus.
Commission president Ursula von der Leyen has backed greater market integration, though the latest measures represent an incremental step rather than a structural overhaul.
Anthropic chief executive Dario Amodei has criticised the US decision to allow the export of advanced AI chips to China, warning it could undermine national security. Speaking at the World Economic Forum 2026 in Davos, he questioned whether selling US-made hardware abroad strengthens American influence.
Amodei compared the policy to ‘selling nuclear weapons to North Korea’, arguing that exporting cutting-edge chips risks narrowing the technological gap between the United States and China. He said Washington currently holds a multi-year lead in advanced chipmaking and AI infrastructure.
Sending powerful hardware overseas could accelerate China’s progress faster than expected, Amodei told Bloomberg. He warned that AI development may soon concentrate unprecedented intelligence within data centres controlled by individual states.
Amodei said AI should not be treated like older technologies such as telecoms equipment. While spreading US technology abroad may have made sense in the past, he argued AI carries far greater strategic consequences.
The debate follows recent rule changes allowing some advanced chips, including Nvidia’s H200 and AMD’s MI325X, to be sold to China. The US administration later announced plans for a 25% tariff on AI chip exports, adding uncertainty for US semiconductor firms.
European users will soon lose access to Setapp Mobile, an alternative app store created under the EU Digital Markets Act. The service will shut down on 16 February 2026.
MacPaw, a Ukrainian software developer known for Mac productivity tools, launched Setapp as a subscription-based app platform. Its mobile store debuted in 2024 to challenge Apple’s App Store in the EU.
Ongoing uncertainty around Apple’s EU fee structure weakened the business case. The Core Technology Fee and frequent commercial changes made planning and sustainable monetisation difficult.
Setapp’s desktop service will continue operating, while the mobile store is discontinued. Other alternative app stores remain available in the EU, including Epic Games Store and the open source AltStore.
BlackRock CEO Larry Fink used his Davos speech to put AI at the centre of a broader warning. In the AI era, trust may become the world’s ‘hardest currency.’
Speaking at the World Economic Forum, he argued that new technologies will only strengthen societies if people believe the benefits are real, fairly shared, and not decided solely by a small circle of insiders.
Fink said AI is already showing a familiar pattern: the earliest gains are flowing mainly to those who control the models, data, and infrastructure. He cautioned that without deliberate choices, AI could deepen inequality in advanced economies, noting that decades of wealth creation after the fall of the Berlin Wall concentrated prosperity among a narrower share of people than a ‘healthy society’ can sustain.
He also raised a specific fear for the workforce, asking whether AI will do to white-collar jobs what globalisation did to blue-collar work: automate, outsource, and reshape employment faster than institutions can protect workers and communities. That risk, he said, is why leaders need to move beyond slogans and produce a credible plan for broad participation in the gains AI can deliver.
The stakes, Fink argued, go beyond economic statistics. Prosperity should not be judged only by GDP or soaring market values, he said, but by whether people can ‘see it, touch it, and build a future on it’, a test that becomes more urgent as AI changes how value is created and who captures it.
Fink tied the AI debate to the legitimacy crisis facing Davos itself, acknowledging that elite institutions are widely distrusted and that many people most affected by these decisions will never enter the conference. If the WEF wants to shape the next phase of the AI transition, he said, it must rebuild trust by listening outside the usual circles and engaging with communities where the modern economy is actually built.
He also urged a different style of conversation about AI, less staged agreement and more serious disagreement, aimed at understanding. In that spirit, he called for the forum to take its discussions beyond Davos, to places such as Detroit, Dublin, Jakarta and Buenos Aires, arguing that only real dialogue, grounded in lived economic realities, can give AI governance and AI-driven growth the legitimacy to last.
Diplo is live reporting on all sessions from the World Economic Forum 2026 in Davos.
The moment many have anticipated with interest or concern has arrived. On 16 January, OpenAI announced the global rollout of its low-cost subscription tier, ChatGPT Go, in all countries where the service is supported. After debuting in India in August 2025 and expanding to Singapore the following month, the USD 8-per-month tier marks OpenAI’s most direct attempt yet to broaden paid access while maintaining assurances that advertising will not be embedded into ChatGPT’s responses.
The move has been widely interpreted as a turning point in the way AI models are monetised. To date, most major AI providers have relied on a combination of external investment, strategic partnerships, and subscription offerings to sustain rapid development. Expectations of transformative breakthroughs and exponential growth have underpinned investor confidence, reinforcing what has come to be described as the AI boom.
Against this backdrop, OpenAI’s long-standing reluctance to embrace advertising takes on renewed significance. As recently as October 2024, chief executive Sam Altman described ads as a ‘last resort’ for the company’s business model. Does that position still reflect confidence in alternative revenue streams, or is OpenAI simply the first company to bite the ad-revenue bullet before other AI ventures have mustered the courage to do so?
ChatGPT, ads, and the integrity of AI responses
Whatever one’s personal feelings about ad-based revenue, its commercial weight is hard to dispute. According to Statista’s Market Insights research, the worldwide advertising market has surpassed USD 1 trillion in annual revenue. With such figures in mind, the appeal of integrating ads wherever possible is obvious.
Furthermore, relying solely on substantial but irregular cash injections is not a reliable way to keep the lights on for a USD 500 billion company, especially in the wake of the RAM crisis. As much as the average consumer would prefer to use digital services without ads, coming up with an alternative and well-grounded revenue stream is tantamount to financial alchemy. Advertising remains one of the few monetisation models capable of sustaining large-scale platforms without significantly raising user costs.
For ChatGPT users, however, the concern centres less on the mere presence of ads and more on how advertising incentives could reshape data use, profiling practices, and the handling of conversational inputs. OpenAI has asked users to ‘trust that ChatGPT’s responses are driven by what’s objectively useful, never by advertising’. Altman’s company has also guaranteed that user data and conversations will remain protected and will never be sold to advertisers.
Such commitments are not made lightly: by making them publicly, Altman stakes the company’s credibility on keeping them and accepts reputational repercussions should they be broken. Since OpenAI is privately held, shifts in investor confidence following the announcement are not visible through public market signals, unlike at publicly listed technology firms. User numbers therefore remain the most reliable gauge of how ChatGPT is perceived by its audience.
Competitive pressure behind ads in ChatGPT
Introducing ads to ChatGPT would be more than a simple change to how OpenAI makes money. Advertising can influence how the model responds to users, even if ads are not shown directly within the answers. Commercial pressure can still shape how information is framed in responses: certain products or services could be described more positively than others, without clearly appearing as advertisements or endorsements.
Recommendations raise particular concern. Many users turn to ChatGPT for advice or comparisons before making important purchases. If advertising becomes part of the model’s business, it may become harder for users to tell whether a suggestion is neutral or influenced by commercial interests. Transparency is also an issue, as the influence is much harder to spot in a chat interface than on websites that clearly label ads with banners or sponsored tags.
While these concerns are valid, competition remains the main force shaping decisions across the AI industry. No major company wants its model to fall behind rivals such as ChatGPT, Gemini, Claude, or other leading systems. Nearly all of these firms have faced public criticism or controversy at some point, forcing them to adjust their strategies and work to rebuild user trust.
The risk of public backlash has so far made companies cautious about introducing advertising. Still, this hesitation is unlikely to last forever. By moving first, OpenAI absorbs most of the initial criticism, while competitors get to stand back, watch how users respond, and adjust their plans accordingly. If advertising proves successful, others are likely to follow, drawing on OpenAI’s experience without bearing the brunt of the growing pains. To quote Arliss Howard’s character in Moneyball: ‘The first guy through the wall always gets bloody’.
ChatGPT advertising and governance challenges
Following the launch of ChatGPT Go, lawmakers and regulators may need to reconsider how existing legal safeguards apply to ad-supported LLMs. Most advertising rules are designed for websites, apps, and social media feeds, rather than systems that generate natural-language responses and present them as neutral or authoritative guidance.
The key question is: which rules should apply? Advertising in chatbots may not resemble traditional ads, muddying the waters for regulation under digital advertising rules, AI governance frameworks, or both. The uncertainty matters largely because different rules come with varying disclosure, transparency, and accountability requirements.
Disclosure presents a further challenge for regulators. On traditional websites, sponsored content is usually labelled and visually separated from editorial material. In an LLM interface such as ChatGPT, however, any commercial influence may appear in the flow of an answer itself. This makes it harder for users to distinguish content shaped by commercial considerations from neutral responses.
In the European Union, this raises questions about how existing regulatory frameworks apply. Advertising in conversational AI may intersect with rules on transparency, manipulation, and user protection under current digital and AI legislation, including the AI Act, the Digital Services Act, and the Digital Markets Act. Clarifying how these frameworks operate in practice will be important as conversational AI systems continue to evolve.
ChatGPT ads and data governance
In the context of ChatGPT, conversational interactions can be more detailed than clicks or browsing history. Prompts may include personal, professional, or sensitive information, which requires careful handling when introducing advertising models. Even without personalised targeting, conversational data still requires clear boundaries. As AI systems scale, maintaining user trust will depend on transparent data practices and strong privacy safeguards.
Then, there’s data retention. Advertising incentives can increase pressure to store conversations for longer periods or to find new ways to extract value from them. For users, this raises concerns about how their data is handled, who has access to it, and how securely it is protected. Even if OpenAI initially avoids personalised advertising, the temptation to monetise conversational data will remain a central issue in the debate about advertising in ChatGPT, not a secondary one.
Clear policies around data use and retention will therefore play a central role in shaping how advertising is introduced. Limits on how long conversations are stored, how data is separated from advertising systems, and how access is controlled can help reduce user uncertainty. Transparency around these practices will be important in maintaining confidence as the platform evolves.
Simultaneously, regulatory expectations and public scrutiny are likely to influence how far advertising models develop. As ChatGPT becomes more widely used across personal, professional, and institutional settings, decisions around data handling will carry broader implications. How OpenAI balances commercial sustainability with privacy and trust may ultimately shape wider norms for advertising in conversational AI.
How ChatGPT ads could reshape the AI ecosystem
We have touched on the potential drawbacks of AI models adopting an ad-revenue model, but what about the benefits? If ChatGPT successfully integrates advertising, it could set an important precedent for the broader industry. As the provider of one of the most widely used general-purpose AI systems, OpenAI’s decisions are closely watched by competitors, policymakers, and investors.
One likely effect would be the gradual normalisation of ad-funded AI assistants. If advertising proves to be a stable revenue source without triggering significant backlash, other providers may view it as a practical path to sustainability. Over time, this could shift user expectations, making advertising a standard feature rather than an exception in conversational AI tools.
Advertising may also intensify competitive pressure on open, academic, or non-profit AI models. Such systems often operate with more limited funding and may struggle to match the resources of ad-supported platforms such as ChatGPT. As a result, the gap between large commercial providers and alternative models could widen, especially in areas such as infrastructure, model performance, and distribution.
Taken together, these dynamics could strengthen the role of major AI providers as gatekeepers. Beyond controlling access to technology, they may increasingly influence which products, services, or ideas gain visibility through AI-mediated interactions. Such a concentration of influence would not be unique to AI, but it raises familiar questions about competition, diversity, and power in digital information ecosystems.
ChatGPT advertising and evolving governance frameworks
Advertising in ChatGPT is not simply a business decision. It highlights a broader shift in the way knowledge, economic incentives, and large-scale AI systems interact. As conversational AI becomes more embedded in everyday life, these developments offer an opportunity to rethink how digital services can remain both accessible and sustainable.
For policymakers and governance bodies, the focus is less on whether advertising appears and more on how it is implemented. Clear rules around transparency, accountability, and user protection can help ensure that conversational AI evolves in ways that support trust, choice, and fair competition, while allowing innovation to continue.
South Korea has narrowed its race to develop a sovereign AI model, eliminating Naver and NCSoft from the government-backed competition. LG AI Research, SK Telecom, and Upstage now advance toward final selection by 2027.
The Ministry of Science and ICT emphasised that independent AI must be trained from scratch with initialised weights. Models reusing pre-trained results, even open-source ones, do not meet this standard.
A wild-card round allows previously eliminated teams to re-enter the competition. Despite this option, major companies have declined, citing unclear benefits and high resource demands.
Industry observers warn that reduced participation could slow momentum for South Korea’s AI ambitions. The outcome is expected to shape the country’s approach to homegrown AI and technological independence.