Four in five workers believe AI will affect their daily tasks, as companies expand the use of AI chatbots and automation tools in the workplace, according to a new Randstad survey.
Demand for roles requiring ‘AI agent’ skills has risen by 1,587%, reflecting a shift towards automation in low-complexity and transactional jobs, the recruitment firm said in its annual Workmonitor report.
Randstad surveyed 27,000 workers and 1,225 employers, analysing more than three million job postings across 35 global markets to assess how AI is reshaping labour demand.
Corporate cost-cutting pressures, weakened consumer confidence, and geopolitical uncertainty linked to US trade policies are accelerating workforce restructuring across multiple industries.
Gen Z workers expressed the highest level of concern about AI’s impact, while Baby Boomers reported greater confidence in their ability to adapt, as nearly half of employees said the technology may benefit companies more than workers.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Cisco has deepened its collaboration with OpenAI to embed agentic AI into enterprise software engineering. The approach reflects a broader shift towards treating AI as operational infrastructure rather than an experimental tool.
Integrating Codex into production workflows exposed it to complex, multi-repository, and security-critical environments. Codex operated across interconnected codebases, running autonomous build and testing loops within existing compliance and governance frameworks.
Operational use delivered measurable results. Engineering teams reported faster builds, higher defect-resolution throughput, and quicker framework migrations, cutting work from weeks to days.
Real-world deployment shaped Codex’s enterprise roadmap, especially around compliance, long-running tasks, and pipeline integration. The collaboration will continue as both organisations pursue AI-native engineering at scale, including within Cisco’s Splunk teams.
The moment many have anticipated with interest or concern has arrived. On 16 January, OpenAI announced the global rollout of its low-cost subscription tier, ChatGPT Go, in all countries where the model is supported. After debuting in India in August 2025 and expanding to Singapore the following month, the USD 8-per-month tier marks OpenAI’s most direct attempt yet to broaden paid access while maintaining assurances that advertising will not be embedded into ChatGPT’s responses.
The move has been widely interpreted as a turning point in the way AI models are monetised. To date, most major AI providers have relied on a combination of external investment, strategic partnerships, and subscription offerings to sustain rapid development. Expectations of transformative breakthroughs and exponential growth have underpinned investor confidence, reinforcing what has come to be described as the AI boom.
Against this backdrop, OpenAI’s long-standing reluctance to embrace advertising takes on renewed significance. As recently as October 2024, chief executive Sam Altman described ads as a ‘last resort’ for the company’s business model. Does that position (still) reflect Altman’s confidence in alternative revenue streams, and is OpenAI simply the first company to bite the ad revenue bullet before other AI ventures have mustered the courage to do so?
ChatGPT, ads, and the integrity of AI responses
Regardless of one’s personal feelings about ad-based revenue, its economic weight is difficult to dispute. According to Statista’s Market Insights research, the worldwide advertising market has surpassed USD 1 trillion in annual revenue. With such figures in mind, it seems like a no-brainer to integrate ads whenever and wherever possible.
Furthermore, relying solely on substantial but irregular cash injections is not a reliable way to keep the lights on for a USD 500 billion company, especially in the wake of the RAM crisis. As much as the average consumer would prefer to use digital services without ads, coming up with an alternative and well-grounded revenue stream is tantamount to financial alchemy. Advertising remains one of the few monetisation models capable of sustaining large-scale platforms without significantly raising user costs.
For ChatGPT users, however, the concern centres less on the mere presence of ads and more on how advertising incentives could reshape data use, profiling practices, and the handling of conversational inputs. OpenAI has urged its users to ‘trust that ChatGPT’s responses are driven by what’s objectively useful, never by advertising’. Altman’s company has also guaranteed that user data and conversations will remain protected and will never be sold to advertisers.
Such bold statements are not made lightly: Altman is publicly committing his company to them and would face repercussions should the promises be broken. Since OpenAI is privately held, shifts in investor confidence following the announcement are not visible through public market signals, unlike at publicly listed technology firms. User count therefore remains the most reliable metric for observing how ChatGPT is perceived by its target audience.
Competitive pressure behind ads in ChatGPT
Introducing ads to ChatGPT would be more than a simple change to how OpenAI makes money. Advertising can influence how the model responds to users, even if ads are not shown directly within the answers. Business pressure can still shape how information is presented through prompts. For example, certain products or services could be described more positively than others, without clearly appearing as advertisements or endorsements.
Recommendations raise particular concern. Many users turn to ChatGPT for advice or comparisons before making important purchases. If advertising becomes part of the model’s business, it may become harder for users to tell whether a suggestion is neutral or influenced by commercial interests. Transparency is also an issue, as the influence is much harder to spot in a chat interface than on websites that clearly label ads with banners or sponsored tags.
While these concerns are valid, competition remains the main force shaping decisions across the AI industry. No major company wants its model to fall behind rivals such as ChatGPT, Gemini, Claude, or other leading systems. Nearly all of these firms have faced public criticism or controversy at some point, forcing them to adjust their strategies and work to rebuild user trust.
The risk of public backlash has so far made companies cautious about introducing advertising. Still, this hesitation is unlikely to last forever. By moving first, OpenAI absorbs most of the initial criticism, while competitors get to stand back, watch how users respond, and adjust their plans accordingly. If advertising proves successful, others are likely to follow, drawing on OpenAI’s experience without bearing the brunt of the growing pains. To quote Arliss Howard’s character in Moneyball: ‘The first guy through the wall always gets bloody’.
ChatGPT advertising and governance challenges
Following the launch of ChatGPT Go, lawmakers and regulators may need to reconsider how existing legal safeguards apply to ad-supported LLMs. Most advertising rules are designed for websites, apps, and social media feeds, rather than systems that generate natural-language responses and present them as neutral or authoritative guidance.
The key question is: which rules should apply? Advertising in chatbots may not resemble traditional ads, muddying the waters for regulation under digital advertising rules, AI governance frameworks, or both. The uncertainty matters largely because different rules come with varying disclosure, transparency, and accountability requirements.
Disclosure presents a further challenge for regulators. On traditional websites, sponsored content is usually labelled and visually separated from editorial material. In an LLM interface such as ChatGPT, however, any commercial influence may appear in the flow of an answer itself. This makes it harder for users to distinguish content shaped by commercial considerations from neutral responses.
In the European Union, this raises questions about how existing regulatory frameworks apply. Advertising in conversational AI may intersect with rules on transparency, manipulation, and user protection under current digital and AI legislation, including the AI Act, the Digital Services Act, and the Digital Markets Act. Clarifying how these frameworks operate in practice will be important as conversational AI systems continue to evolve.
ChatGPT ads and data governance
In the context of ChatGPT, conversational interactions can be more detailed than clicks or browsing history. Prompts may include personal, professional, or sensitive information, which requires careful handling when introducing advertising models. Even without personalised targeting, conversational data still requires clear boundaries. As AI systems scale, maintaining user trust will depend on transparent data practices and strong privacy safeguards.
Then, there’s data retention. Advertising incentives can increase pressure to store conversations for longer periods or to find new ways to extract value from them. For users, this raises concerns about how their data is handled, who has access to it, and how securely it is protected. Even if OpenAI initially avoids personalised advertising, the temptation to monetise conversational data will remain a central issue in the discussion about advertising in ChatGPT, not a secondary one.
Clear policies around data use and retention will therefore play a central role in shaping how advertising is introduced. Limits on how long conversations are stored, how data is separated from advertising systems, and how access is controlled can help reduce user uncertainty. Transparency around these practices will be important in maintaining confidence as the platform evolves.
Simultaneously, regulatory expectations and public scrutiny are likely to influence how far advertising models develop. As ChatGPT becomes more widely used across personal, professional, and institutional settings, decisions around data handling will carry broader implications. How OpenAI balances commercial sustainability with privacy and trust may ultimately shape wider norms for advertising in conversational AI.
How ChatGPT ads could reshape the AI ecosystem
We have touched on the potential drawbacks of AI models adopting an ad-revenue model, but what about the benefits? If ChatGPT successfully integrates advertising, it could set an important precedent for the broader industry. As the provider of one of the most widely used general-purpose AI systems, OpenAI’s decisions are closely watched by competitors, policymakers, and investors.
One likely effect would be the gradual normalisation of ad-funded AI assistants. If advertising proves to be a stable revenue source without triggering significant backlash, other providers may view it as a practical path to sustainability. Over time, this could shift user expectations, making advertising a standard feature rather than an exception in conversational AI tools.
Advertising may also intensify competitive pressure on open, academic, or non-profit AI models. Such systems often operate with more limited funding and may struggle to match the resources of ad-supported platforms such as ChatGPT. As a result, the gap between large commercial providers and alternative models could widen, especially in areas such as infrastructure, model performance, and distribution.
Taken together, these dynamics could strengthen the role of major AI providers as gatekeepers. Beyond controlling access to technology, they may increasingly influence which products, services, or ideas gain visibility through AI-mediated interactions. Such a concentration of influence would not be unique to AI, but it raises familiar questions about competition, diversity, and power in digital information ecosystems.
ChatGPT advertising and evolving governance frameworks
Advertising in ChatGPT is not simply a business decision. It highlights a broader shift in the way knowledge, economic incentives, and large-scale AI systems interact. As conversational AI becomes more embedded in everyday life, these developments offer an opportunity to rethink how digital services can remain both accessible and sustainable.
For policymakers and governance bodies, the focus is less on whether advertising appears and more on how it is implemented. Clear rules around transparency, accountability, and user protection can help ensure that conversational AI evolves in ways that support trust, choice, and fair competition, while allowing innovation to continue.
The UK government has appointed two senior industry figures as AI Champions to support safe and effective adoption of AI across financial services, as part of a broader push to boost growth and productivity.
Harriet Rees of Starling Bank and Dr Rohit Dhawan of Lloyds Banking Group will work with firms and regulators to help turn rapid AI uptake into practical delivery. Both will report directly to Lucy Rigby, the Economic Secretary to the Treasury.
AI is already widely deployed across the sector, with around three-quarters of UK financial firms using the technology. Analysis indicates AI could add tens of billions of pounds to financial services by 2030, while improving customer services and reducing costs.
The Champions will focus on accelerating trusted adoption, speeding up innovation, and removing barriers to scale. Their remit includes protecting consumers, supporting financial stability, and strengthening the UK’s role as a global economic and technology hub.
AI tools used for health searches are facing growing scrutiny after reports found that some systems provide incorrect or potentially harmful medical advice. Wider public use of generative AI for health queries raises concerns over how such information is generated and verified.
An investigation by The Guardian found that Google AI Overview has sometimes produced guidance contrary to established medical advice. Attention has also focused on data sources, as platforms like ChatGPT frequently draw on user-generated or openly edited material.
Medical experts warn that unverified or outdated information poses risks, especially where clinical guidance changes rapidly. The European Lung Foundation has stressed that health-related AI outputs should meet the same standards as professional medical sources.
Efforts to counter misinformation are now expanding. The European Respiratory Society and its partners are running campaigns to protect public trust in science and encourage people to verify health information with qualified professionals.
Russia’s telecom watchdog is preparing to expand its use of AI to monitor and restrict access to prohibited online content, a move expected to affect parts of the cryptocurrency ecosystem.
Roskomnadzor plans to invest more than 2 billion rubles in machine-learning tools designed to analyse internet traffic and improve enforcement against banned websites and VPN services. Blocking activity has already accelerated, with hundreds of VPNs and more than a million websites restricted during 2025.
Industry observers warn that stronger filtering could disrupt access to foreign-based crypto exchanges, mining pools, and information services. Major platforms are not currently blocked, but wider AI use is expected to accelerate detection of mirror sites and circumvention tools.
Regulatory changes under discussion could further reshape market access. Proposals would allow licensed domestic institutions to handle crypto transactions while imposing separate rules on specialised exchanges, potentially limiting the operations of foreign providers.
The European Commission has signalled readiness to escalate action against Elon Musk’s AI chatbot Grok, following concerns over the spread of non-consensual sexualised images on the social media platform X.
EU tech chief Henna Virkkunen told Members of the European Parliament that existing digital rules allow regulators to respond to risks linked to AI-driven nudification tools.
Grok has been associated with the circulation of digitally altered images depicting real people, including women and children, without consent. Virkkunen described such practices as unacceptable and stressed that protecting minors online remains a central priority for EU enforcement under the Digital Services Act.
While no formal investigation has yet been launched, the Commission is examining whether X may breach the DSA and has already ordered the platform to retain internal information related to Grok until the end of 2026.
Commission President Ursula von der Leyen has also publicly condemned the creation of sexualised AI images without consent.
The controversy has intensified calls from EU lawmakers to strengthen regulation, with several urging an explicit ban on AI-powered nudification under the forthcoming AI Act.
The debate reflects wider international pressure on governments to address the misuse of generative AI technologies and reinforce safeguards across digital platforms.
Global enterprise software provider SAP has entered a strategic collaboration with German healthcare group Fresenius to apply AI and digital technologies to healthcare delivery and clinical operations.
The partnership aims to modernise processes, including patient flow, resource planning, and data-driven decision support, across Fresenius’s hospital networks and care facilities.
At the core of the initiative will be SAP’s AI-enabled enterprise platforms, including analytics, predictive modelling and workflow automation, combined with Fresenius’s clinical expertise to improve operational efficiency, care coordination and patient outcomes.
By leveraging real-time data and AI insights, the collaboration seeks to reduce administrative burden on clinicians while enabling proactive management of capacity and critical resources.
Both organisations emphasise the potential of AI to support clinicians rather than replace them, reinforcing the importance of human oversight, explainability and adherence to healthcare regulations and privacy standards.
The partnership also reflects a broader trend of digital transformation in health systems, where analytics and AI are becoming integral to service delivery and system resilience.
Seoul and Rome have announced plans to deepen cooperation in high-technology sectors, notably AI, semiconductor development and space technology, as part of a broader strategic partnership.
The agreement reflects shared interests in advancing cutting-edge technology and innovation, reinforcing economic and scientific collaboration between South Korea and Italy.
Both countries see these areas as central to future economic competitiveness and technological leadership on the global stage.
While details of specific programmes have not yet been disclosed publicly, officials emphasised the mutual benefits of enhanced research partnerships, talent exchange, and joint development initiatives spanning emerging technologies and advanced industrial sectors.
Taiwan-based electronics manufacturer ASUS has announced that it will not launch new smartphones in 2026, signalling a strategic pivot away from mobile devices and towards AI-driven products and robotics.
Chairman Jonney Shih confirmed at a company event that ASUS will redirect research and development resources previously earmarked for phones into AI hardware such as robotics, AI glasses and commercial PCs.
The move comes amid a hyper-competitive global smartphone market and supply-chain pressures, such as rising memory costs, that make handset manufacturing less attractive than high-growth AI sectors.
ASUS will continue to support existing smartphone users with warranty and software updates, but does not plan to introduce new phone models in the foreseeable future.
Industry observers say this shift reflects broader trends in consumer electronics, where traditional phone makers are seeking growth by leveraging AI innovation and emerging device categories.