AI is reshaping how people work, learn and participate in society, prompting calls for universities to take a more active leadership role. A new book by Juan M. Lavista Ferres of Microsoft’s AI Economy Institute argues that higher education institutions must move faster to prepare students for an AI-driven world.
Balancing technical training with long-standing academic values remains a central challenge. Institutions are encouraged to teach practical AI skills while continuing to emphasise critical thinking, communication and ethical reasoning.
AI literacy is increasingly seen as essential for both employment and daily life. Early labour market data suggests that AI proficiency is already linked to higher wages, reinforcing calls for higher education institutions to embed AI education across disciplines rather than treating it as a specialist subject.
Developers, educators and policymakers are also urged to improve their understanding of each other’s roles. Technical knowledge must be matched with awareness of AI’s social impact, while non-technical stakeholders need clearer insight into how AI systems function.
Closer cooperation between universities, industry and governments is expected to shape the next phase of AI adoption. Higher education institutions are being asked to set recognised standards for AI credentials, expand access to training, and ensure inclusive pathways for diverse learners.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Health care in Africa is set to benefit from AI through a new initiative by the Gates Foundation and OpenAI. Horizon1000 aims to expand AI-powered support across 1,000 primary care clinics in Rwanda by 2028.
Severe shortages of health workers in Sub-Saharan Africa have limited access to quality care, with the region facing a shortfall of nearly six million professionals. AI tools will assist doctors and nurses by handling administrative tasks and providing clinical guidance.
Rwanda has launched an AI Health Intelligence Centre to make better use of limited resources and improve patient outcomes. The initiative will deploy AI in communities and homes, ensuring support reaches beyond clinic walls.
Experts believe AI represents a major medical breakthrough, comparable to vaccines and antibiotics. By helping health workers focus on patient care, the technology could reduce preventable deaths and transform health systems across low- and middle-income countries.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
One Medical has launched a Health AI assistant in its mobile app, offering personalised health guidance at any time. The tool uses verified medical records to support everyday healthcare decisions.
Patients can use the assistant to explain lab results, manage prescriptions, and book virtual or in-person appointments. Clinical safeguards ensure users are referred to human clinicians when medical judgement is required.
Powered by Amazon Bedrock, the assistant operates under HIPAA-compliant privacy standards and does not sell personal health data. Amazon says clinician and member feedback will shape future updates.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Shri Kashi Vishwanath Temple in India has launched an AI-powered chatbot to help devotees access services from anywhere in the world. The tool provides quick information on rituals, bookings, and temple timings.
Devotees can now book darshan, special aartis, and order prasad online. The chatbot also guides pilgrims on guesthouse availability and directions around Varanasi.
Supporting Hindi, English, and regional languages, the AI ensures smooth communication for global visitors. The initiative aims to simplify temple visits, especially during festivals and crowded periods.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Advanced language models have demonstrated the ability to generate working exploits for previously unknown software vulnerabilities. Security researcher Sean Heelan tested two systems built on GPT-5.2 and Opus 4.5 by challenging them to exploit a zero-day flaw in the QuickJS JavaScript interpreter.
Across multiple scenarios with varying security protections, GPT-5.2 completed every task, while Opus 4.5 failed only two. The systems produced more than 40 functional exploits, ranging from basic shell access to complex file-writing operations that bypassed modern defences.
Most challenges were solved in under an hour, with standard attempts costing around $30. Even the most complex exploit, which bypassed protections such as address space layout randomisation, non-executable memory, and seccomp sandboxing, was completed in just over three hours for roughly $50.
The most advanced task required GPT-5.2 to write a specific string to a protected file path without access to operating system functions. The model achieved this by chaining seven function calls through the glibc exit handler mechanism, bypassing shadow stack protections.
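For readers unfamiliar with the mechanism, the snippet below is a benign, minimal illustration of how glibc's exit-handler table chains function calls: functions registered with atexit() are stored as pointers and invoked in reverse order of registration when the process exits. The exploit abused this same dispatch table to redirect execution; nothing exploit-specific is shown here.

```c
#include <stdio.h>
#include <stdlib.h>

/* Handlers registered with atexit() are stored as function pointers in
   glibc's exit-handler table and run in reverse order of registration. */
static void first_registered(void)  { puts("runs last"); }
static void second_registered(void) { puts("runs second"); }
static void third_registered(void)  { puts("runs first"); }

int main(void)
{
    atexit(first_registered);
    atexit(second_registered);
    atexit(third_registered);
    return 0; /* returning from main calls exit(), which walks the table */
}
```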
The findings suggest exploit development may increasingly depend on computational resources rather than human expertise. While QuickJS is less complex than browsers such as Chrome or Firefox, the approach demonstrated could scale to larger and more secure software environments.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
ChatGPT earned full marks in nine subjects during this year’s unified university entrance examinations in Japan. LifePrompt Inc reported that the AI achieved 97 percent accuracy across 15 subjects overall.
The subjects with perfect scores included mathematics, chemistry, informatics, and politics and economics. Performance was lower in Japanese, where ChatGPT scored 90 percent, reflecting the challenge of processing long, complex texts.
Tests were conducted without internet access, with the AI relying solely on pre-stored data. The results show that ChatGPT has improved steadily since 2024, exceeding the scores required for competitive programmes such as Human Sciences I at the University of Tokyo.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Authorities in Russia are increasing pressure on WhatsApp, one of the country’s most widely used messaging platforms. The service remains popular despite years of tightening digital censorship.
Officials argue that WhatsApp refuses to comply with national laws on data storage and cooperation with law enforcement. Meta has no legal presence in Russia and continues to reject requests for user information.
State-backed alternatives, such as the national messenger Max, are being promoted through institutional pressure. Critics warn that restricting WhatsApp targets private communication rather than crime or security threats.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The moment many have anticipated with interest or concern has arrived. On 16 January, OpenAI announced the global rollout of its low-cost subscription tier, ChatGPT Go, in all countries where the model is supported. After debuting in India in August 2025 and expanding to Singapore the following month, the USD 8-per-month tier marks OpenAI’s most direct attempt yet to broaden paid access while maintaining assurances that advertising will not be embedded into ChatGPT’s responses.
The move has been widely interpreted as a turning point in the way AI models are monetised. To date, most major AI providers have relied on a combination of external investment, strategic partnerships, and subscription offerings to sustain rapid development. Expectations of transformative breakthroughs and exponential growth have underpinned investor confidence, reinforcing what has come to be described as the AI boom.
Against this backdrop, OpenAI’s long-standing reluctance to embrace advertising takes on renewed significance. As recently as October 2024, chief executive Sam Altman described ads as a ‘last resort’ for the company’s business model. Does that position still reflect confidence in alternative revenue streams, or is OpenAI simply the first company to bite the ad-revenue bullet before other AI ventures have mustered the courage to do so?
ChatGPT, ads, and the integrity of AI responses
Regardless of one’s personal feelings about ad-based revenue, its economic weight is hard to dispute. According to Statista’s Market Insights research, the worldwide advertising market has surpassed USD 1 trillion in annual revenue. With figures like that in mind, integrating ads whenever and wherever possible can look like a no-brainer.
Furthermore, relying solely on substantial but irregular cash injections is not a reliable way to keep the lights on for a USD 500 billion company, especially in the wake of the RAM crisis. As much as the average consumer would prefer to use digital services without ads, coming up with an alternative and well-grounded revenue stream is tantamount to financial alchemy. Advertising remains one of the few monetisation models capable of sustaining large-scale platforms without significantly raising user costs.
For ChatGPT users, however, the concern centres less on the mere presence of ads and more on how advertising incentives could reshape data use, profiling practices, and the handling of conversational inputs. OpenAI has urged users to ‘trust that ChatGPT’s responses are driven by what’s objectively useful, never by advertising’. Altman’s company has also guaranteed that user data and conversations will remain protected and will never be sold to advertisers.
Such commitments are not given lightly: OpenAI has staked its credibility on them and would face real repercussions if they were broken. Since the company is privately held, shifts in investor confidence following the announcement are not visible through public market signals, unlike at publicly listed technology firms. User numbers therefore remain the most reliable gauge of how ChatGPT is perceived by its target audience.
Competitive pressure behind ads in ChatGPT
Introducing ads to ChatGPT would be more than a simple change to how OpenAI makes money. Advertising can influence how the model responds to users even if ads are never shown directly within the answers: commercial pressure can still shape how information is framed. For example, certain products or services could be described more positively than others, without clearly appearing as advertisements or endorsements.
Recommendations raise particular concern. Many users turn to ChatGPT for advice or comparisons before making important purchases. If advertising becomes part of the model’s business, it may become harder for users to tell whether a suggestion is neutral or influenced by commercial interests. Transparency is also an issue, as the influence is much harder to spot in a chat interface than on websites that clearly label ads with banners or sponsored tags.
While these concerns are valid, competition remains the main force shaping decisions across the AI industry. No major company wants its model to fall behind rivals such as ChatGPT, Gemini, Claude, or other leading systems. Nearly all of these firms have faced public criticism or controversy at some point, forcing them to adjust their strategies and work to rebuild user trust.
The risk of public backlash has so far made companies cautious about introducing advertising. Still, this hesitation is unlikely to last forever. By moving first, OpenAI absorbs most of the initial criticism, while competitors get to stand back, watch how users respond, and adjust their plans accordingly. If advertising proves successful, others are likely to follow, drawing on OpenAI’s experience without bearing the brunt of the growing pains. To quote Arliss Howard’s character in Moneyball: ‘The first guy through the wall always gets bloody’.
ChatGPT advertising and governance challenges
Following the launch of ChatGPT Go, lawmakers and regulators may need to reconsider how existing legal safeguards apply to ad-supported LLMs. Most advertising rules are designed for websites, apps, and social media feeds, rather than systems that generate natural-language responses and present them as neutral or authoritative guidance.
The key question is: which rules should apply? Advertising in chatbots may not resemble traditional ads, muddying the waters for regulation under digital advertising rules, AI governance frameworks, or both. The uncertainty matters largely because different rules come with varying disclosure, transparency, and accountability requirements.
Disclosure presents a further challenge for regulators. On traditional websites, sponsored content is usually labelled and visually separated from editorial material. In an LLM interface such as ChatGPT, however, any commercial influence may appear in the flow of an answer itself. This makes it harder for users to distinguish content shaped by commercial considerations from neutral responses.
In the European Union, this raises questions about how existing regulatory frameworks apply. Advertising in conversational AI may intersect with rules on transparency, manipulation, and user protection under current digital and AI legislation, including the AI Act, the Digital Services Act, and the Digital Markets Act. Clarifying how these frameworks operate in practice will be important as conversational AI systems continue to evolve.
ChatGPT ads and data governance
In the context of ChatGPT, conversational interactions can be more detailed than clicks or browsing history. Prompts may include personal, professional, or sensitive information, which requires careful handling when introducing advertising models. Even without personalised targeting, conversational data still requires clear boundaries. As AI systems scale, maintaining user trust will depend on transparent data practices and strong privacy safeguards.
Then, there’s data retention. Advertising incentives can increase pressure to store conversations for longer periods or to find new ways to extract value from them. For users, this raises concerns about how their data is handled, who has access to it, and how securely it is protected. Even if OpenAI initially avoids personalised advertising, the commercial pull of conversational data will remain a central issue in the debate about advertising in ChatGPT, not a secondary one.
Clear policies around data use and retention will therefore play a central role in shaping how advertising is introduced. Limits on how long conversations are stored, how data is separated from advertising systems, and how access is controlled can help reduce user uncertainty. Transparency around these practices will be important in maintaining confidence as the platform evolves.
Simultaneously, regulatory expectations and public scrutiny are likely to influence how far advertising models develop. As ChatGPT becomes more widely used across personal, professional, and institutional settings, decisions around data handling will carry broader implications. How OpenAI balances commercial sustainability with privacy and trust may ultimately shape wider norms for advertising in conversational AI.
How ChatGPT ads could reshape the AI ecosystem
We have touched on the potential drawbacks of AI models adopting an ad-revenue model, but what about the benefits? If ChatGPT successfully integrates advertising, it could set an important precedent for the broader industry. As the provider of one of the most widely used general-purpose AI systems, OpenAI’s decisions are closely watched by competitors, policymakers, and investors.
One likely effect would be the gradual normalisation of ad-funded AI assistants. If advertising proves to be a stable revenue source without triggering significant backlash, other providers may view it as a practical path to sustainability. Over time, this could shift user expectations, making advertising a standard feature rather than an exception in conversational AI tools.
Advertising may also intensify competitive pressure on open, academic, or non-profit AI models. Such systems often operate with more limited funding and may struggle to match the resources of ad-supported platforms such as ChatGPT. As a result, the gap between large commercial providers and alternative models could widen, especially in areas such as infrastructure, model performance, and distribution.
Taken together, these dynamics could strengthen the role of major AI providers as gatekeepers. Beyond controlling access to technology, they may increasingly influence which products, services, or ideas gain visibility through AI-mediated interactions. Such a concentration of influence would not be unique to AI, but it raises familiar questions about competition, diversity, and power in digital information ecosystems.
ChatGPT advertising and evolving governance frameworks
Advertising in ChatGPT is not simply a business decision. It highlights a broader shift in the way knowledge, economic incentives, and large-scale AI systems interact. As conversational AI becomes more embedded in everyday life, these developments offer an opportunity to rethink how digital services can remain both accessible and sustainable.
For policymakers and governance bodies, the focus is less on whether advertising appears and more on how it is implemented. Clear rules around transparency, accountability, and user protection can help ensure that conversational AI evolves in ways that support trust, choice, and fair competition, while allowing innovation to continue.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
South Korea has narrowed its race to develop a sovereign AI model, eliminating Naver and NCSoft from the government-backed competition. LG AI Research, SK Telecom, and Upstage now advance toward final selection by 2027.
The Ministry of Science and ICT emphasised that an independent AI model must be trained from scratch with initialised weights. Models that reuse pre-trained weights, even open-source ones, do not meet this standard.
A wild-card round allows previously eliminated teams to re-enter the competition. Despite this option, major companies have declined, citing unclear benefits and high resource demands.
Industry observers warn that reduced participation could slow momentum for South Korea’s AI ambitions. The outcome is expected to shape the country’s approach to homegrown AI and technological independence.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Scientists in China have developed an error-aware probabilistic update (EaPU) method to improve neural network training on memristor hardware. The method tackles accuracy and stability limits in analog computing.
Noisy weight updates have made on-device training inefficient, limiting progress beyond inference tasks. EaPU applies probabilistic, threshold-based updates that preserve learning while sharply reducing write operations.
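The paper’s exact algorithm is not reproduced here, but the core idea can be sketched: small updates are applied stochastically, with probability proportional to their magnitude, so the expected weight change is preserved while most physical write operations are skipped. The sketch below is a minimal illustration under our own assumptions (the function name, threshold rule, and probability scheme are invented for the example), not the authors’ implementation.

```c
#include <stdlib.h>
#include <math.h>

/* Hypothetical sketch of a threshold-based probabilistic weight update.
   Updates below the write threshold are committed with probability
   |delta| / threshold, so their expected contribution is preserved while
   most costly (and noisy) memristor writes are skipped entirely. */
void eapu_step(float *weights, const float *grads, int n,
               float lr, float write_threshold)
{
    for (int i = 0; i < n; i++) {
        float delta = -lr * grads[i];
        if (fabsf(delta) >= write_threshold) {
            weights[i] += delta;   /* large update: always written */
        } else {
            float p = fabsf(delta) / write_threshold;
            if ((float)rand() / (float)RAND_MAX < p) {
                /* write one full threshold-sized step in delta's direction;
                   the expected applied update equals delta, with far fewer writes */
                weights[i] += copysignf(write_threshold, delta);
            }
        }
    }
}
```

Every skipped write saves energy and spares device endurance, which is consistent with the efficiency and lifespan gains reported below.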
Experiments and simulations show major gains in energy efficiency, accuracy and device lifespan across vision models. Results suggest broader potential for sustainable AI training using emerging memory technologies.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!