UN to train governments in blockchain and AI

The UN Development Programme (UNDP) plans to launch a ‘Government Blockchain Academy’ next year to educate public sector officials on blockchain, AI, and other emerging technologies.

The initiative aims to help governments leverage tech for economic growth and sustainable development.

The academy will partner with the Exponential Science Foundation, a non-profit promoting blockchain and AI. Training will cover financial services, digital IDs, public procurement, smart contracts, and climate finance to help governments boost transparency, inclusion, and resilience.

UNDP officials highlighted that developing countries, including India, Pakistan, and Vietnam, are already among the leading adopters of crypto technology.

The academy will provide in-person and online courses, workshops, and forums to guide high-impact blockchain initiatives aligned with national priorities.

The programme follows last year’s UNDP blockchain academy, created in partnership with the Algorand Foundation, which trained over 22,000 staff members to support sustainable growth projects in participating countries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI search tools challenge Google’s dominance

AI tools are increasingly reshaping how people search online, with large language models like ChatGPT drawing millions away from traditional engines.

Montreal-based lawyer and consultant Anja-Sara Lahady says she now turns to ChatGPT instead of Google for everyday tasks such as meal ideas, interior decoration tips and drafting low-risk emails. She describes it as a second assistant rather than a replacement for legal reasoning.

ChatGPT’s weekly user base has surged to around 800 million, double the figure reported earlier in 2025. Data shows that nearly 6% of desktop searches are already directed to language models, compared with barely half that rate a year ago.

Academics such as Professor Feng Li argue that users favour AI tools because they reduce cognitive effort by providing clear summaries instead of multiple links. However, he warns that verification remains essential due to factual errors.

Google insists its search activity continues to expand, supported by AI Overviews and AI Mode, which offer more conversational and tailored answers.

Yet, testimony in a US antitrust case revealed that Google searches on Apple devices via Safari declined for the first time in two decades, underlining the competitive pressure from AI.

The rise of language models is also forcing a shift in digital marketing. Agencies report that LLMs highlight trusted websites, press releases and established media rather than social media content.

This change may influence consumer habits, with evidence suggesting that referrals from AI systems often lead to higher-quality sales conversions. For many users, AI now represents a faster and more personal route to decisions on products, travel or professional tasks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK to benefit from Google’s £5 billion AI plan

Google has unveiled plans to invest £5 billion (around $6.8 billion) in the UK’s AI economy over the next two years.

The announcement came just hours before US President Donald Trump’s official visit to the country, during which economic agreements worth more than $10 billion are expected.

The investment will include establishing a new AI data centre in Waltham Cross, Hertfordshire, designed to meet growing demand for services like Google Cloud.

Alongside the facility, funds will be channelled into research and development, capital expenditure, engineering, and DeepMind’s work applying AI to science and healthcare. The project is expected to support 8,250 jobs a year at British businesses.

Google also revealed a partnership with Shell to support grid stability and contribute to the UK’s energy transition. The move highlights the economic and environmental stakes tied to AI expansion, as the UK positions itself as a hub for advanced digital technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Alphabet hits US$3 trillion valuation on AI optimism

Google’s parent company, Alphabet, has become the fourth company to reach a market value above US$3 trillion, fuelled by investor confidence in AI and relief over a favourable antitrust ruling.

Its shares jumped 4.3 percent to close at US$251.76 on 15 September, lifting the firm’s valuation to US$3.05 trillion.

The rally has added about US$1.2 trillion in value since April, with Alphabet joining Apple and Microsoft in the elite group while Nvidia remains the most valuable at US$4.25 trillion.

Investor optimism has been strengthened by expectations of a US Federal Reserve rate cut and surging demand for AI-related products.

Alphabet’s communications services unit has risen more than 26 percent in 2025, outpacing all other major sectors. Strong growth in its cloud division, new AI investments, and the Gemini model have reinforced the company’s momentum.

Analysts note that, while search continues to dominate revenues, Alphabet is increasingly viewed as a diversified technology powerhouse with YouTube, Waymo, and AI research at its core.

By avoiding a forced breakup of Chrome and Android, the antitrust ruling also removed a significant threat to its business model.

Market strategists suggest Alphabet now combines the strength of its legacy platforms with the credibility of emerging technologies, securing its place at the centre of Wall Street’s AI-driven rally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Generative AI enables rapid phishing attacks on older users

A recent study has shown that AI chatbots can generate convincing phishing emails targeting older people. Researchers tested six major chatbots, including Grok, ChatGPT, Claude, Meta AI, DeepSeek, and Google’s Gemini, by asking them to draft scam emails posing as charitable organisations.

Of 108 senior volunteers, roughly 11% clicked on the AI-written links, highlighting the ease with which cybercriminals could exploit such tools.

Some chatbots initially declined harmful requests, but minor adjustments, such as stating the task was for research purposes, circumvented these safeguards.

Grok, in particular, produced messages urging recipients to ‘click now’ and join a fictitious charity, demonstrating how generative AI can amplify the persuasiveness of scams. Researchers warn that criminals could use AI to conduct large-scale phishing campaigns at minimal cost.

Phishing remains the most common cybercrime in the US, according to the FBI, with seniors disproportionately affected. Last year, Americans over 60 lost nearly $5 billion to phishing attacks, an increase driven partly by generative AI.

The study underscores the urgent need for awareness and protection measures among vulnerable populations.

Experts note that AI’s ability to generate varied scam messages rapidly poses a new challenge for cybersecurity, as it allows fraudsters to scale operations quickly while targeting specific demographics, including older people.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google lays off over 200 AI contractors amid union tensions

US tech giant Google has dismissed over 200 contractors working on its Gemini chatbot and AI Overviews tool. The dismissals have sparked criticism from labour advocates and claims of retaliation against workers pushing for unionisation.

Many of the affected staff were highly trained ‘super raters’ who helped refine Google’s AI systems before being abruptly laid off.

The move highlights growing concerns over job insecurity in the AI sector, where companies depend heavily on outsourced and low-paid contract workers instead of permanent employees.

Workers allege they were penalised for raising issues about inadequate pay, poor working conditions, and the risks of training AI that could eventually replace them.

Google has attempted to distance itself from the controversy, arguing that subcontractor GlobalLogic handled the layoffs rather than the company itself.

Yet critics say that outsourcing allows the tech giant to expand its AI operations without accountability, while undermining collective bargaining efforts.

Labour experts warn that the cuts reflect a broader industry trend in which AI development rests on precarious work arrangements. With union-busting claims intensifying, the dismissals are now seen as part of a deeper struggle over workers’ rights in the digital economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Lumex chips bring advanced AI to mobile devices

Arm Holdings has unveiled Lumex, its next-generation chip designs built to bring advanced AI performance directly to mobile devices.

The new designs range from highly energy-efficient chips for wearables to high-performance versions capable of running large AI models on smartphones without cloud support.

Lumex forms part of Arm’s Compute Subsystems business, offering handset makers pre-integrated designs, while also strengthening Arm’s broader strategy to expand smartphone and data centre revenues.

The chips are tailored for 3-nanometre manufacturing processes provided by suppliers such as TSMC, whose technology is also used in Apple’s latest iPhone chips. Arm has indicated further investment in its own chip development to capitalise on demand.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Privacy-preserving AI gets a boost with Google’s VaultGemma model

Google has unveiled VaultGemma, a new large language model built to deliver strong privacy guarantees through differential privacy. The 1-billion-parameter model is based on Google’s Gemma architecture and is described as the most powerful differentially private LLM to date.

Differential privacy adds mathematical noise to data, preventing the identification of individuals while still producing accurate overall results. The method has long been used in regulated industries, but has been challenging to apply to large language models without compromising performance.
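The core idea can be sketched with a toy example (this is illustrative only, not Google’s training pipeline, which applies noise during model training rather than to raw records): a counting query is made differentially private by adding Laplace noise calibrated to the query’s sensitivity. The function names and the choice of the privacy parameter epsilon below are the author’s assumptions.

```python
import random

def dp_count(records, predicate, epsilon):
    """Differentially private count: the true count plus Laplace noise.

    A counting query has sensitivity 1 (adding or removing one person
    changes the result by at most 1), so the Laplace noise scale is
    1 / epsilon. Smaller epsilon means more noise and stronger privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    scale = 1.0 / epsilon
    # A Laplace(0, scale) sample is the difference of two independent
    # exponential samples with rate 1 / scale.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

# Hypothetical usage: a noisy count of people over 40 in a dataset.
ages = [23, 41, 57, 34, 68, 45, 29, 52]
noisy = dp_count(ages, lambda a: a > 40, epsilon=0.5)
```

No single individual’s presence can be confidently inferred from the noisy result, yet over large datasets the noise averages out, which is why aggregate statistics remain accurate.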

VaultGemma is designed to eliminate that trade-off. Google states that the model can be trained and deployed with differential privacy enabled, while maintaining comparable stability and efficiency to non-private LLMs.

This breakthrough could have significant implications for developers building privacy-sensitive AI systems in fields ranging from healthcare and finance to government services. It demonstrates that sensitive data can be protected without sacrificing speed or accuracy.

Google’s research teams say the model will be released with open-source tools to help others adopt privacy-preserving techniques. The move comes amid rising regulatory and public scrutiny over how AI systems handle personal data.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EdChat AI app set for South Australian schools amid calls for careful use

South Australian public schools will soon gain access to EdChat, a ChatGPT-style app developed by Microsoft in partnership with the state government. Education Minister Blair Boyer said the tool will roll out next term across public high schools following a successful trial.

Safeguards have been built into EdChat to protect student data and alert moderators if students type concerning prompts, such as those related to self-harm or other sensitive topics. Boyer said student mental health was a priority during the design phase.

Teachers report that students use EdChat to clarify instructions, get maths solutions explained, and quiz themselves on exam topics. Adelaide Botanic High School principal Sarah Chambers described it as an ‘education equaliser’ that provides students with access to support throughout the day.

While many educators in Australia welcome the rollout, experts warn against overreliance on AI tools. Toby Walsh of UNSW said students must still learn how to write essays and think critically, while others noted that AI could actually encourage deeper questioning and analysis.

RMIT computing expert Michael Cowling said generative AI can strengthen critical thinking when used for brainstorming and refining ideas. He emphasised that students must learn to critically evaluate AI output and utilise the technology as a tool, rather than a substitute for learning.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Industry leaders urge careful AI use in research projects

The 2026 Adwanted Media Research Awards will feature a new category for Best Use of AI in Research Projects, reflecting the growing importance of this technology in the industry.

Head judge Denise Turner of IPA said AI should be viewed as a tool to expedite workflows, not replace human insight, emphasising that researchers remain essential to interpreting results and posing the right questions.

Route CEO Euan Mackay said AI enables digital twins, synthetic data, and clean-room integrations, shifting researchers’ roles from survey design to auditing and ensuring data integrity in an AI-driven environment.

OMD’s Laura Rowe highlighted AI’s ability to rapidly process raw data, transcribe qualitative research, and extend insights across strategy and planning — provided ethical oversight remains in place.

ITV’s Neil Mortensen called this the start of a ‘gold rush’, urging the industry to use AI to automate tedious tasks while preserving rigorous methods and enabling more time for deep analysis.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!