AI set to guide Japanese political party decisions

A small Japanese political party has announced plans to install an AI system as its leader following its founder’s resignation.

The Path to Rebirth party was created in January by Shinji Ishimaru, a former mayor who rose to prominence after placing second in the 2024 Tokyo gubernatorial election. He stepped down after the party failed to secure seats in this year’s upper house elections.

The AI would oversee internal decisions such as distributing resources, but would not dictate members’ political activities. A party member named Okumura, who won an internal contest to succeed Ishimaru, will serve as the nominal leader while supporting the development of the AI.

Despite attracting media attention, the party has faced heavy electoral defeats, with all 42 of its candidates losing in the June Tokyo assembly election and all 10 of its upper house candidates defeated in July.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Study reveals why humans adapt better than AI

A new interdisciplinary study from Bielefeld University and other leading institutions explores why humans excel at adapting to new situations while AI systems often struggle. Researchers found humans generalise through abstraction and concepts, while AI relies on statistical or rule-based methods.

The study proposes a framework to align human and AI reasoning, defining generalisation, how it works, and how it can be assessed. Experts say differences in generalisation limit AI flexibility and stress the need for human-centred design in medicine, transport, and decision-making.

Researchers collaborated across more than 20 institutions, including Bielefeld, Bamberg, Amsterdam, and Oxford, under the SAIL project. The initiative aims to develop AI systems that are sustainable, transparent, and better able to support human values and decision-making.

Interdisciplinary insights may guide the responsible use of AI in human-AI teams, ensuring machines complement rather than disrupt human judgement.

The findings underline the importance of bridging cognitive science and AI research to foster more adaptable, trustworthy, and human-aligned AI systems capable of tackling complex, real-world challenges.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN to train governments in blockchain and AI

The UN Development Programme (UNDP) plans to launch a ‘Government Blockchain Academy’ next year to educate public sector officials on blockchain, AI, and other emerging technologies.

The initiative aims to help governments leverage tech for economic growth and sustainable development.

The academy will partner with the Exponential Science Foundation, a non-profit promoting blockchain and AI. Training will cover financial services, digital IDs, public procurement, smart contracts, and climate finance to help governments boost transparency, inclusion, and resilience.

UNDP officials highlighted that developing countries, including India, Pakistan, and Vietnam, are already among the leading adopters of crypto technology.

The academy will provide in-person and online courses, workshops, and forums to guide high-impact blockchain initiatives aligned with national priorities.

The programme follows last year’s UNDP blockchain academy, created in partnership with the Algorand Foundation, which trained over 22,000 staff members to support sustainable growth projects in participating countries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI search tools challenge Google’s dominance

AI tools are increasingly reshaping how people search online, with large language models like ChatGPT drawing millions away from traditional engines.

Montreal-based lawyer and consultant Anja-Sara Lahady says she now turns to ChatGPT instead of Google for everyday tasks such as meal ideas, interior decoration tips and drafting low-risk emails. She describes it as a second assistant rather than a replacement for legal reasoning.

ChatGPT’s weekly user base has surged to around 800 million, roughly double the figure reported earlier in 2025. Data shows that nearly 6% of desktop searches are already directed to language models, compared with barely half that rate a year ago.

Academics such as Professor Feng Li argue that users favour AI tools because they reduce cognitive effort by providing clear summaries instead of multiple links. However, he warns that verification remains essential due to factual errors.

Google insists its search activity continues to expand, supported by AI Overviews and AI Mode, which offer more conversational and tailored answers.

Yet, testimony in a US antitrust case revealed that Google searches on Apple devices via Safari declined for the first time in two decades, underlining the competitive pressure from AI.

The rise of language models is also forcing a shift in digital marketing. Agencies report that LLMs highlight trusted websites, press releases and established media rather than social media content.

This change may influence consumer habits, with evidence suggesting that referrals from AI systems often lead to higher-quality sales conversions. For many users, AI now represents a faster and more personal route to decisions on products, travel or professional tasks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK to benefit from Google’s £5 billion AI plan

Google has unveiled plans to invest £5 billion (around $6.8 billion) in the UK’s AI economy over the next two years.

The announcement comes just hours before US President Donald Trump’s official visit to the country, during which economic agreements worth more than $10 billion are expected.

The investment will include establishing a new AI data centre in Waltham Cross, Hertfordshire, designed to meet growing demand for services like Google Cloud.

Alongside the facility, funds will be channelled into research and development, capital expenditure, engineering, and DeepMind’s work applying AI to science and healthcare. The project is expected to support 8,250 jobs a year at British companies.

Google also revealed a partnership with Shell to support grid stability and contribute to the UK’s energy transition. The move highlights the economic and environmental stakes tied to AI expansion, as the UK positions itself as a hub for advanced digital technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Alphabet hits US$3 trillion valuation on AI optimism

Google’s parent company, Alphabet, has become the fourth company to reach a market value above US$3 trillion, fuelled by investor confidence in AI and relief over a favourable antitrust ruling.

Its shares jumped 4.3 percent to close at US$251.76 on 15 September, lifting the firm’s valuation to US$3.05 trillion.

The rally has added about US$1.2 trillion in value since April, with Alphabet joining Apple and Microsoft in the elite group while Nvidia remains the most valuable at US$4.25 trillion.

Investor optimism has been strengthened by expectations of a US Federal Reserve rate cut and surging demand for AI-related products.

Alphabet’s communications services unit has risen more than 26 percent in 2025, outpacing all other major sectors. Strong growth in its cloud division, new AI investments, and the Gemini model have reinforced the company’s momentum.

Analysts note that, while search continues to dominate revenues, Alphabet is increasingly viewed as a diversified technology powerhouse with YouTube, Waymo, and AI research at its core.

By avoiding a forced breakup of Chrome and Android, the antitrust ruling also removed a significant threat to its business model.

Market strategists suggest Alphabet now combines the strength of its legacy platforms with the credibility of emerging technologies, securing its place at the centre of Wall Street’s AI-driven rally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Generative AI enables rapid phishing attacks on older users

A recent study has shown that AI chatbots can generate convincing phishing emails aimed at older people. Researchers tested six major chatbots (Grok, ChatGPT, Claude, Meta AI, DeepSeek, and Google’s Gemini) by asking them to draft scam emails posing as charitable organisations.

Of 108 senior volunteers, roughly 11% clicked on the AI-written links, highlighting the ease with which cybercriminals could exploit such tools.

Some chatbots initially declined the harmful requests, but minor adjustments, such as stating the task was for research purposes, often circumvented these safeguards.

Grok, in particular, produced messages urging recipients to ‘click now’ and join a fictitious charity, demonstrating how generative AI can amplify the persuasiveness of scams. Researchers warn that criminals could use AI to conduct large-scale phishing campaigns at minimal cost.

Phishing remains the most common cybercrime in the US, according to the FBI, with seniors disproportionately affected. Last year, Americans over 60 lost nearly $5 billion to phishing attacks, an increase driven partly by generative AI.

The study underscores the urgent need for awareness and protection measures among vulnerable populations.

Experts note that AI’s ability to generate varied scam messages rapidly poses a new challenge for cybersecurity, as it allows fraudsters to scale operations quickly while targeting specific demographics, including older people.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google lays off over 200 AI contractors amid union tensions

The US tech giant Google has dismissed over 200 contractors working on its Gemini chatbot and AI Overviews tool, sparking criticism from labour advocates and claims of retaliation against workers pushing for unionisation.

Many affected staff were highly trained ‘super raters’ who helped refine Google’s AI systems, yet were abruptly laid off.

The move highlights growing concerns over job insecurity in the AI sector, where companies depend heavily on outsourced and low-paid contract workers instead of permanent employees.

Workers allege they were penalised for raising issues about inadequate pay, poor working conditions, and the risks of training AI that could eventually replace them.

Google has attempted to distance itself from the controversy, arguing that subcontractor GlobalLogic handled the layoffs rather than the company itself.

Yet critics say that outsourcing allows the tech giant to expand its AI operations without accountability, while undermining collective bargaining efforts.

Labour experts warn that the cuts reflect a broader industry trend in which AI development rests on precarious work arrangements. With union-busting claims intensifying, the dismissals are now seen as part of a deeper struggle over workers’ rights in the digital economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Lumex chips bring advanced AI to mobile devices

Arm Holdings has unveiled Lumex, its next-generation chip designs built to bring advanced AI performance directly to mobile devices.

The new designs range from highly energy-efficient chips for wearables to high-performance versions capable of running large AI models on smartphones without cloud support.

Lumex forms part of Arm’s Compute Subsystems business, offering handset makers pre-integrated designs, while also strengthening Arm’s broader strategy to expand smartphone and data centre revenues.

The chips are tailored for 3-nanometre manufacturing processes provided by suppliers such as TSMC, whose technology is also used in Apple’s latest iPhone chips. Arm has indicated further investment in its own chip development to capitalise on demand.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Privacy-preserving AI gets a boost with Google’s VaultGemma model

Google has unveiled VaultGemma, a new large language model built to offer cutting-edge privacy through differential privacy. The 1-billion-parameter model is based on Google’s Gemma architecture and is described as the most powerful differentially private LLM to date.

Differential privacy adds mathematical noise to data, preventing the identification of individuals while still producing accurate overall results. The method has long been used in regulated industries, but has been challenging to apply to large language models without compromising performance.
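To make the idea concrete, here is a minimal Python sketch of the classic Laplace mechanism, the textbook way to add calibrated noise to an aggregate statistic. The function names and the example survey count are illustrative, not from Google's VaultGemma work (which applies differential privacy during model training, a more involved process):

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two independent exponential draws with the same
    # rate follows a Laplace(0, scale) distribution.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with epsilon-differential privacy.

    Adding or removing one individual changes the count by at most
    `sensitivity`, so Laplace noise with scale sensitivity/epsilon masks
    any single person's presence while keeping the total roughly accurate.
    Smaller epsilon means stronger privacy but noisier results.
    """
    return true_count + laplace_noise(sensitivity / epsilon)

# Hypothetical example: a survey counted 1,000 positive responses;
# release the count without revealing whether any one person responded.
released = private_count(1000, epsilon=1.0)
```

The released value stays close to the true count (typically within a few units at epsilon = 1), which is why aggregate results remain useful even though no individual record can be pinned down.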

VaultGemma is designed to eliminate that trade-off. Google states that the model can be trained and deployed with differential privacy enabled, while maintaining comparable stability and efficiency to non-private LLMs.

This breakthrough could have significant implications for developers building privacy-sensitive AI systems, ranging from healthcare and finance to government services. It demonstrates that sensitive data can be protected without sacrificing speed or accuracy.

Google’s research teams say the model will be released with open-source tools to help others adopt privacy-preserving techniques. The move comes amid rising regulatory and public scrutiny over how AI systems handle personal data.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!