ChatGPT and the rising pressure to commercialise AI in 2026

The moment many have anticipated with interest or concern has arrived. On 16 January, OpenAI announced the global rollout of its low-cost subscription tier, ChatGPT Go, in all countries where the model is supported. After debuting in India in August 2025 and expanding to Singapore the following month, the USD 8-per-month tier marks OpenAI’s most direct attempt yet to broaden paid access while maintaining assurances that advertising will not be embedded into ChatGPT’s prompts.

The move has been widely interpreted as a turning point in the way AI models are monetised. To date, most major AI providers have relied on a combination of external investment, strategic partnerships, and subscription offerings to sustain rapid development. Expectations of transformative breakthroughs and exponential growth have underpinned investor confidence, reinforcing what has come to be described as the AI boom.

Against this backdrop, OpenAI’s long-standing reluctance to embrace advertising takes on renewed significance. As recently as October 2024, chief executive Sam Altman described ads as a ‘last resort’ for the company’s business model. Does that position (still) reflect Altman’s confidence in alternative revenue streams, and is OpenAI simply the first company to bite the ad revenue bullet before other AI ventures have mustered the courage to do so?

ChatGPT, ads, and the integrity of AI responses

Regardless of one’s personal feelings about ad-based revenue, the facts about its essentiality are irrefutable. According to Statista’s Market Insights research, the worldwide advertising market has surpassed USD 1 trillion in annual revenue. With such figures in mind, it seems like a no-brainer to integrate ads whenever and wherever possible.

Furthermore, relying solely on substantial but irregular cash injections is not a reliable way to keep the lights on for a USD 500 billion company, especially in the wake of the RAM crisis. As much as the average consumer would prefer to use digital services without ads, coming up with an alternative and well-grounded revenue stream is tantamount to financial alchemy. Advertising remains one of the few monetisation models capable of sustaining large-scale platforms without significantly raising user costs.

For ChatGPT users, however, the concern centres less on the mere presence of ads and more on how advertising incentives could reshape data use, profiling practices, and the handling of conversational inputs. OpenAI has pleaded with its users to ‘trust that ChatGPT’s responses are driven by what’s objectively useful, never by advertising’. Altman’s company has also guaranteed that user data and conversations will remain protected and will never be sold to advertisers.

Such bold statements are never given lightly, meaning Altman fully stands behind his company’s words and is prepared to face repercussions should he break his promises. Since OpenAI is privately held, shifts in investor confidence following the announcement are not visible through public market signals, unlike at publicly listed technology firms. User count remains the most reliable metric for observing how ChatGPT is perceived by its target audience.

Competitive pressure behind ads in ChatGPT

Introducing ads to ChatGPT would be more than a simple change to how OpenAI makes money. Advertising can influence how the model responds to users, even if ads are not shown directly within the answers. Business pressure can still shape how information is presented through prompts. For example, certain products or services could be described more positively than others, without clearly appearing as advertisements or endorsements.

Recommendations raise particular concern. Many users turn to ChatGPT for advice or comparisons before making important purchases. If advertising becomes part of the model’s business, it may become harder for users to tell whether a suggestion is neutral or influenced by commercial interests. Transparency is also an issue, as the influence is much harder to spot in a chat interface than on websites that clearly label ads with banners or sponsored tags.

Three runners at a starting line wearing bibs with AI company logos, symbolising competition over advertising and monetisation in AI models, initiated by ChatGPT

While these concerns are valid, competition remains the main force shaping decisions across the AI industry. No major company wants its model to fall behind rivals such as ChatGPT, Gemini, Claude, or other leading systems. Nearly all of these firms have faced public criticism or controversy at some point, forcing them to adjust their strategies and work to rebuild user trust.

The risk of public backlash has so far made companies cautious about introducing advertising. Still, this hesitation is unlikely to last forever. By moving first, OpenAI absorbs most of the initial criticism, while competitors get to stand back, watch how users respond, and adjust their plans accordingly. If advertising proves successful, others are likely to follow, drawing on OpenAI’s experience without bearing the brunt of the growing pains. To quote Arliss Howard’s character in Moneyball: ‘The first guy through the wall always gets bloody’.

ChatGPT advertising and governance challenges

Following the launch of ChatGPT Go, lawmakers and regulators may need to reconsider how existing legal safeguards apply to ad-supported LLMs. Most advertising rules are designed for websites, apps, and social media feeds, rather than systems that generate natural-language responses and present them as neutral or authoritative guidance.

The key question is: which rules should apply? Advertising in chatbots may not resemble traditional ads, muddying the waters for regulation under digital advertising rules, AI governance frameworks, or both. The uncertainty matters largely because different rules come with varying disclosure, transparency, and accountability requirements.

Disclosure presents a further challenge for regulators. On traditional websites, sponsored content is usually labelled and visually separated from editorial material. In an LLM interface such as ChatGPT, however, any commercial influence may appear in the flow of an answer itself. This makes it harder for users to distinguish content shaped by commercial considerations from neutral responses.

In the European Union, this raises questions about how existing regulatory frameworks apply. Advertising in conversational AI may intersect with rules on transparency, manipulation, and user protection under current digital and AI legislation, including the AI Act, the Digital Services Act, and the Digital Markets Act. Clarifying how these frameworks operate in practice will be important as conversational AI systems continue to evolve.

ChatGPT ads and data governance

In the context of ChatGPT, conversational interactions can be more detailed than clicks or browsing history. Prompts may include personal, professional, or sensitive information, which requires careful handling when introducing advertising models. Even without personalised targeting, conversational data still requires clear boundaries. As AI systems scale, maintaining user trust will depend on transparent data practices and strong privacy safeguards.

Then, there’s data retention. Advertising incentives can increase pressure to store conversations for longer periods or to find new ways to extract value from them. For users, this raises concerns about how their data is handled, who has access to it, and how securely it is protected. Even if OpenAI initially avoids personalised advertising, the lingering allure will remain a central issue in the discussion about advertising in ChatGPT, not a secondary one.

Clear policies around data use and retention will therefore play a central role in shaping how advertising is introduced. Limits on how long conversations are stored, how data is separated from advertising systems, and how access is controlled can help reduce user uncertainty. Transparency around these practices will be important in maintaining confidence as the platform evolves.

Simultaneously, regulatory expectations and public scrutiny are likely to influence how far advertising models develop. As ChatGPT becomes more widely used across personal, professional, and institutional settings, decisions around data handling will carry broader implications. How OpenAI balances commercial sustainability with privacy and trust may ultimately shape wider norms for advertising in conversational AI.

How ChatGPT ads could reshape the AI ecosystem

We have touched on the potential drawbacks of AI models adopting an ad-revenue model, but what about the benefits? If ChatGPT successfully integrates advertising, it could set an important precedent for the broader industry. As the provider of one of the most widely used general-purpose AI systems, OpenAI’s decisions are closely watched by competitors, policymakers, and investors.

One likely effect would be the gradual normalisation of ad-funded AI assistants. If advertising proves to be a stable revenue source without triggering significant backlash, other providers may view it as a practical path to sustainability. Over time, this could shift user expectations, making advertising a standard feature rather than an exception in conversational AI tools.

Advertising may also intensify competitive pressure on open, academic, or non-profit AI models. Such systems often operate with more limited funding and may struggle to match the resources of ad-supported platforms such as ChatGPT. As a result, the gap between large commercial providers and alternative models could widen, especially in areas such as infrastructure, model performance, and distribution.

Taken together, these dynamics could strengthen the role of major AI providers as gatekeepers. Beyond controlling access to technology, they may increasingly influence which products, services, or ideas gain visibility through AI-mediated interactions. Such a concentration of influence would not be unique to AI, but it raises familiar questions about competition, diversity, and power in digital information ecosystems.

ChatGPT advertising and evolving governance frameworks

Advertising in ChatGPT is not simply a business decision. It highlights a broader shift in the way knowledge, economic incentives, and large-scale AI systems interact. As conversational AI becomes more embedded in everyday life, these developments offer an opportunity to rethink how digital services can remain both accessible and sustainable.

For policymakers and governance bodies, the focus is less on whether advertising appears and more on how it is implemented. Clear rules around transparency, accountability, and user protection can help ensure that conversational AI evolves in ways that support trust, choice, and fair competition, while allowing innovation to continue.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU considers further action against Grok over AI nudification concerns

The European Commission has signalled readiness to escalate action against Elon Musk’s AI chatbot Grok, following concerns over the spread of non-consensual sexualised images on the social media platform X.

The EU tech chief Henna Virkkunen told Members of the European Parliament that existing digital rules allow regulators to respond to risks linked to AI-driven nudification tools.

Grok has been associated with the circulation of digitally altered images depicting real people, including women and children, without consent. Virkkunen described such practices as unacceptable and stressed that protecting minors online remains a central priority for the EU enforcement under the Digital Services Act.

While no formal investigation has yet been launched, the Commission is examining whether X may breach the DSA and has already ordered the platform to retain internal information related to Grok until the end of 2026.

Commission President Ursula von der Leyen has also publicly condemned the creation of sexualised AI images without consent.

The controversy has intensified calls from EU lawmakers to strengthen regulation, with several urging an explicit ban on AI-powered nudification under the forthcoming AI Act.

A debate that reflects wider international pressure on governments to address the misuse of generative AI technologies and reinforce safeguards across digital platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI travel influencers begin reshaping digital storytelling

India’s first AI-generated travel influencer, Radhika Subramaniam, has begun attracting sustained audience engagement since her launch in mid-2025, signalling growing acceptance of virtual creators in travel content.

Developed by Collective Artists Network, a talent management company based in India, Radhika initially drew attention through curiosity, but followers increasingly interacted with her posts in ways similar to those of human influencers, according to the company’s leadership.

Industry observers say AI travel influencers offer brands greater efficiency, lower production costs, and more control over storytelling, as virtual creators can be deployed without logistical constraints.

Some creators remain sceptical about whether artificial personas can replicate the emotional authenticity and sensory experiences that shape real-world travel storytelling.

Marketing specialists expect AI and human influencers to coexist, with virtual avatars serving as consistent brand voices while human creators retain value through spontaneity, trust, and personal perspective.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

iOS security warnings intensify for older devices

Apple has issued a renewed warning to iPhone users, urging them to install the latest version of iOS to avoid exposure to emerging spyware threats targeting older versions.

Devices running iOS 26 are no longer fully protected by remaining on version 18, even after updating to the latest patch. Apple has indicated that recent attacks exploit vulnerabilities that only the newest operating system can address.

Security agencies in France and the United States recommend regularly powering down smartphones to disrupt certain forms of non-persistent spyware that operate in memory.

A complete shutdown using physical buttons, rather than on-screen controls, is advised as part of a basic security routine, particularly for users who delay major software upgrades.

While restarting alone cannot replace software updates, experts stress that keeping iOS up to date remains the most effective defence against zero-click exploits delivered through everyday apps such as iMessage.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Strong growth pushes OpenAI past $20 billion in annualised revenue

OpenAI’s annualised revenue has surpassed $20 billion in 2025, up from $6 billion a year earlier. The company’s computing capacity and user numbers have also continued to grow.

The company recently confirmed it will begin showing advertisements in ChatGPT to some users in the United States. The move is part of a broader effort to generate additional revenue to cover the high costs of developing and running advanced AI systems.

OpenAI’s platform now spans text, images, voice, code, and application programming interfaces. CFO Sarah Friar said the next phase of development will focus on agents and workflow automation that can operate continuously, retain context over time, and take action across multiple tools.

Looking ahead to 2026, the company plans to prioritise what it calls ‘practical adoption’, with a particular emphasis on health, science, and enterprise use cases. The aim is to move beyond experimentation and embed AI more deeply into real-world applications.

Friar also said OpenAI intends to maintain a ‘light’ balance sheet by partnering with external providers rather than owning infrastructure outright. Contracts will remain flexible across hardware types and suppliers as the company continues to scale its operations.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Gemini introduces Answer Now button for faster AI replies

A new ‘Answer Now’ button has been added to Gemini, allowing users to skip extended reasoning and receive instant replies. The feature appears alongside the spinning status indicator in Gemini 3 Pro and Thinking/Flash, but is not available in the Fast model.

When selected, the button confirms that Gemini is ‘skipping in-depth thinking’ before delivering a quicker response. Google says the tool is designed for general questions where speed is prioritised over detailed analysis.

The update coincides with changes to usage limits across subscription plans. AI Pro users now receive 300 Thinking prompts and 100 Pro prompts per day, while AI Ultra users get 1,500 Thinking prompts and 500 Pro prompts daily.

Free users also gain access to the revised limits, listed as ‘Basic access’ for both the Thinking and Pro models. Google has not indicated whether the Fast model will receive the Answer Now feature.

The rollout follows the recent launch of Gemini’s Personal Intelligence feature, which allows the chatbot to draw on Google services such as Gmail and Search history. Google says Answer Now will replace the existing Skip button and is now available on Android, iOS, and the web.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Forced labour data opened to the public

Exiger has launched a free online tool designed to help organisations identify links to forced labour in global supply chains. The platform, called forcedlabor.ai, was unveiled during the annual meeting of the World Economic Forum in Davos.

The tool allows users to search suppliers and companies to assess potential exposure to state-sponsored forced labour, with an initial focus on risks linked to China. Exiger says the database draws on billions of records and is powered by proprietary AI to support compliance and ethical sourcing.

US lawmakers and human rights groups have welcomed the initiative, arguing that companies face growing legal and reputational risks if their supply chains rely on forced labour. The platform highlights risks linked to US import restrictions and enforcement actions.

Exiger says making the data freely available aims to level the playing field for smaller firms with limited compliance budgets. The company argues that greater transparency can help reduce modern slavery across industries, from retail to agriculture.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Cyberviolence against women rises across Europe amid deepfake abuse

Digital violence targeting women and girls is spreading across Europe, according to new research highlighting cyberstalking, surveillance and online threats as the most common reported abuses.

Digital tools have expanded opportunities for communication, yet online environments increasingly expose women to persistent harassment instead of safety and accountability.

Image-based abuse has grown sharply, with deepfake pornography now dominating synthetic sexual content and almost exclusively targeting women.

More than half of European countries report rising cases of non-consensual intimate image sharing, while national data show women forming a clear majority of cyberstalking and online threat victims.

Algorithmic systems accelerate the circulation of misogynistic material, creating enclosed digital spaces where abuse is normalised rather than challenged. Researchers warn that automated recommendation mechanisms can quickly spread harmful narratives, particularly among younger audiences.

Recent generative technologies have further intensified concerns by enabling sexualised image manipulation with limited safeguards.

Investigations into chatbot-generated images prompted new restrictions, yet women’s rights groups argue that enforcement and prevention still lag behind the scale of online harm.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gemini flaw exposed Google Calendar data through hidden prompts

A vulnerability in Google Calendar allowed attackers to bypass privacy controls by embedding hidden instructions in standard calendar invitations. The issue exploited how Gemini interprets natural language when analysing user schedules.

Researchers at Miggo found that malicious prompts could be placed inside event descriptions. When Gemini scanned calendar data to answer routine queries, it unknowingly processed the embedded instructions.

The exploit used indirect prompt injection, a technique in which harmful commands are hidden within legitimate content. The AI model treated the text as trusted context rather than a potential threat.

In the proof-of-concept attack, Gemini was instructed to summarise a user’s private meetings and store the information in a new calendar event. The attacker could then access the data without alerting the victim.

Google confirmed the findings and deployed a fix after responsible disclosure. The case highlights growing security risks linked to how AI systems interpret natural language inputs.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

European Parliament moves to force AI companies to pay news publishers

Lawmakers in the EU are moving closer to forcing technology companies to pay news publishers for the use of journalistic material in model training, according to a draft copyright report circulating in the European Parliament.

The text forms part of a broader effort to update copyright enforcement as automated content systems expand across media and information markets.

Compromise amendments also widen the scope beyond payment obligations, bringing AI-generated deepfakes and synthetic manipulation into sharper focus.

MEPs argue that existing legal tools fail to offer sufficient protection for publishers, journalists and citizens when automated systems reproduce or distort original reporting.

The report reflects growing concern that platform-driven content extraction undermines the sustainability of professional journalism. Lawmakers are increasingly framing compensation mechanisms as a corrective measure rather than as voluntary licensing or opaque commercial arrangements.

If adopted, the position of the Parliament would add further regulatory pressure on large technology firms already facing tighter scrutiny under the Digital Markets Act and related digital legislation, reinforcing Europe’s push to assert control over data use, content value and democratic safeguards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!