Taiwan leads in AI defence of democracy

Taiwan has emerged as a global model for using AI to defend democracy, earning recognition for its success in combating digital disinformation.

The island joined a new international coalition led by the International Foundation for Electoral Systems to strengthen election integrity through AI collaboration.

Constantly targeted by foreign actors, Taiwan has developed proactive digital defence systems that serve as blueprints for other democracies.

Its rapid response strategies and tech-forward approach have made it a leader in countering AI-powered propaganda.

While many nations are only beginning to grasp the risks posed by AI to democratic systems, Taiwan has already faced these threats and adapted.

Its approach now shapes global policy discussions around safeguarding elections in the digital era.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta expands AI ambitions with more OpenAI hires

According to a report published by The Information on Sunday, Meta Platforms has hired four additional researchers from OpenAI.

The researchers—Shengjia Zhao, Jiahui Yu, Shuchao Bi, and Hongyu Ren—are set to join Meta’s AI team as part of a broader recruitment drive. All four were previously involved in AI development at OpenAI, the Microsoft-backed company behind ChatGPT and other generative models.

Earlier in the week, The Wall Street Journal reported that Meta had hired three more OpenAI researchers—Lucas Beyer, Alexander Kolesnikov and Xiaohua Zhai—based in the firm’s Zurich office.

The hires reflect Meta’s increased investment in advanced AI research, particularly in ‘superintelligence’, a term CEO Mark Zuckerberg has used to describe future AI capabilities.

Meta and OpenAI have not yet responded to requests for comment. Reuters noted that it could not independently verify the hiring details at the time of reporting.

With growing competition among tech giants in AI innovation, Meta’s continued talent acquisition suggests a clear intention to strengthen its internal capabilities through strategic hiring.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI training with pirated books triggers massive legal risk

A US court has ruled that AI company Anthropic engaged in copyright infringement by downloading millions of pirated books to train its language model, Claude.

Although the court found that using copyrighted material for AI training could qualify as ‘fair use’ under US law when the content is transformed, it also held that acquiring the content illegally instead of licensing it lawfully constituted theft.

Judge William Alsup described AI as one of the most transformative technologies of our time, but found that Anthropic had obtained millions of digital books from pirate sites such as LibGen and Pirate Library Mirror.

He noted that buying the same books later in print form does not erase the initial violation, though it may reduce potential damages.

The penalty for wilful copyright infringement in the US can reach up to $150,000 per work, meaning total damages could run into the billions.
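For a rough, purely illustrative sense of scale (assuming, hypothetically, that the statutory maximum were applied to only a fraction of the titles at issue):

$$10{,}000 \text{ works} \times \$150{,}000 \text{ per work} = \$1.5 \text{ billion}$$

With millions of books in question, even awards well below the maximum would quickly reach that range.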

The case highlights the fine line between transformation and theft and signals growing legal pressure on AI firms to respect intellectual property instead of bypassing established licensing frameworks.

Australia, which uses a ‘fair dealing’ system rather than ‘fair use’, already offers flexible licensing schemes through organisations like the Copyright Agency.

CEO Josephine Johnston urged policymakers not to weaken Australia’s legal framework in favour of global tech companies, arguing that licensing provides certainty for developers and fair payment to content creators.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New NHS plan adds AI to protect patient safety

The NHS is set to introduce a world-first AI system to detect patient safety risks early by analysing hospital data for warning signs of deaths, injuries, or abuse.

Instead of waiting for patterns to emerge through traditional oversight, the AI will use near real-time data to trigger alerts and launch rapid inspections.

Health Secretary Wes Streeting announced that a new maternity-focused AI tool will roll out across NHS trusts in November. It will monitor stillbirths, brain injuries and death rates, helping identify issues before they become scandals.

The initiative forms part of a new 10-year plan to modernise the health service and move it from analogue to digital care.

The technology will send alerts to the Care Quality Commission, whose teams will investigate flagged cases. Professor Meghana Pandit, NHS England’s medical director, said the UK would become the first country to trial this AI-enabled early warning system to improve patient care.

CQC chief Sir Julian Hartley added it would strengthen quality monitoring across services.

However, nursing leaders voiced concerns that AI could distract from more urgent needs. Professor Nicola Ranger of the Royal College of Nursing warned that low staffing levels remain a critical issue.

She stressed that one nurse often handles too many patients, and technology should not replace the essential investment in frontline staff.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia insiders sell over $1bn in shares amid AI market boom

Senior Nvidia executives have sold more than $1bn worth of shares over the past year, with over half of those sales taking place in June.

The move comes as Nvidia’s stock soared to record highs, driven by renewed investor enthusiasm for AI. According to the Financial Times, insiders took advantage of the AI-driven rally instead of waiting for further market shifts.

Among those selling shares was Nvidia CEO Jensen Huang, who offloaded stock for the first time since September, as revealed in recent regulatory filings.

The surge in share price helped the company briefly reclaim its title as the world’s most valuable firm, following upbeat forecasts from analysts predicting Nvidia will ride a ‘Golden Wave’ of AI growth.

Nvidia’s stock has recovered more than 60% since early April, when markets were rattled by President Donald Trump’s global tariff plans.

The rebound reflects optimism that upcoming trade negotiations may soften the economic blow and keep momentum behind tech and AI-focused firms.

Nvidia declined to comment on the report.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Sam Altman reverses his stance on AI hardware as current computers can’t meet the demands

Sam Altman, CEO of OpenAI, has reversed his earlier position that artificial general intelligence (AGI) would not require new hardware.

Speaking on a podcast with his brother, Altman said current computers are no longer suited for the fast-evolving demands of AI. Instead of relying on standard hardware, he now believes new solutions are necessary.

OpenAI has already started developing dedicated AI hardware, including potential custom chips, marking a shift from using general-purpose GPUs and servers.

Altman also hinted at a new device — not a wearable, nor a phone — that could serve as an AI companion. Designed to be screen-free and aware of its surroundings, the product is being co-developed with former Apple design chief Jony Ive.

The collaboration, however, has run into legal trouble. A federal judge recently ordered OpenAI and Ive to pause the promotion of the new venture after a trademark dispute with a startup named IYO, which had previously pitched similar ideas to Altman’s investment firm.

OpenAI’s recent $6.5 billion acquisition of io Products, co-founded by Ive, reflects the company’s deeper commitment to reshaping how people interact with AI.

Altman’s revised stance on hardware suggests the era of purpose-built AI devices is no longer a vision but a necessary reality.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI governance through the lens of magical realism

AI today straddles the line between the extraordinary and the mundane, a duality that evokes the spirit of magical realism—a literary genre where the impossible blends seamlessly with the real. Speaking at the 20th Internet Governance Forum (IGF) in Lillestrøm, Norway, Jovan Kurbalija proposed that we might better understand the complexities of AI governance by viewing it through this narrative lens.

Like Gabriel García Márquez’s floating characters or Salman Rushdie’s prophetic protagonists, AI’s remarkable feats—writing novels, generating art, mimicking human conversation—are increasingly accepted without question, despite their inherent strangeness.

Kurbalija argues that AI, much like the supernatural in literature, doesn’t merely entertain; it reveals and shapes profound societal realities. Algorithms quietly influence politics, reshape economies, and even redefine relationships.

Just as magical realism uses the extraordinary to comment on power, identity, and truth, AI forces us to confront new ethical dilemmas: Who owns AI-created content? Can consent be meaningfully given to machines? And does predictive technology amplify societal biases?

The risks of AI—job displacement, misinformation, surveillance—are akin to the symbolic storms of magical realism: always present, always shaping the backdrop. Governance, then, must walk a fine line between stifling innovation and allowing unchecked technological enchantment.

Kurbalija warns against ‘black magic’ policy manipulation cloaked in humanitarian language and urges regulators to focus on real-world impacts while resisting the temptation of speculative fears. Ultimately, AI isn’t science fiction—it’s magical realism in motion.

As we build policies and frameworks to govern it, we must ensure this magic serves humanity rather than distorting our sense of what is real, ethical, and just. In this unfolding story, the challenge is not only technological, but deeply human.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SoftBank shifts focus to AI and next-generation chips

Masayoshi Son, founder and CEO of SoftBank, has indicated his readiness to pass the leadership baton after decades at the helm. Speaking to shareholders in Tokyo, the 67-year-old entrepreneur said he had mentally prepared to step aside and had already identified internal candidates.

However, he noted that revealing a successor prematurely could affect dynamics within the company.

While succession planning is underway, Son is focused on positioning SoftBank as a global leader in artificial superintelligence (ASI).

The company is pursuing aggressive investments, including a proposed $30 billion stake in OpenAI, the acquisition of UK-based Graphcore, and a potential purchase of US firm Ampere Computing.

Plans are also in motion to build a central tech hub in Arizona, modelled on Shenzhen, featuring advanced chip infrastructure and a possible partnership with TSMC.

SoftBank’s reach extends well beyond the US and Japan. In India, it has invested over $10 billion across 24 companies, including Paytm, Ola Electric, and Swiggy. These ventures have spurred rapid growth and successful IPOs, reinforcing SoftBank’s influence over the country’s digital economy.

Shareholder confidence plays a crucial role in sustaining SoftBank’s bold innovation strategy. Many Japanese retail investors have remained loyal for decades, drawn by Son’s enduring vision and the promise of future breakthroughs.

With AI now firmly at the centre of SoftBank’s roadmap, the company is betting big on a future it hopes to shape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google Doppl, the new AI app, turns outfit photos into try-on videos

Google has unveiled Doppl, a new AI-powered app that lets users create short videos of themselves wearing any outfit they choose.

Instead of relying on imagination or guesswork, Doppl allows people to upload full-body photos and apply outfits spotted on social media, in thrift shops, or on friends, creating animated try-ons that bring static images to life.

The app builds on Google’s earlier virtual try-on tools integrated with its Shopping Graph. Doppl pushes things further by transforming still photos into motion videos, showing how clothes flow and fit in movement.

Users can upload their full-body image or choose an AI model to preview outfits. However, Google warns that fit and details might not always be accurate at this early stage.

Doppl is currently only available in the US for Android and iOS users aged 18 or older. While Google encourages sharing videos with friends and followers, the tool raises concerns about misuse, such as generating content using photos of others.

Google’s policy requires disclosure if someone impersonates another person, but the company admits that some abuse may occur. To address the issue, Doppl content will include invisible watermarks for tracking.

In its privacy notice, Google confirmed that user uploads and generated videos will be used to improve AI technologies and services. However, data will be anonymised and separated from user accounts before any human review is allowed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Dutch government to build AI plant with €70 million pledge

The Dutch government has pledged €70 million to build a new AI facility in Groningen, establishing a European hub for AI research and development.

A consortium of Dutch organisations will manage the plant and focus on healthcare, agriculture, defence and energy applications.

The government is also seeking an additional €70 million in EU co-financing and has welcomed a separate €60 million contribution from the Groningen regional administration.

The plant is expected to be commissioned in 2026 and reach operation by early 2027 if funding is secured.

Minister of Economic Affairs Vincent Karremans emphasised the need to develop domestic AI capacity, warning that dependence on foreign technologies could threaten national competitiveness and digital independence.

‘Those who do not develop the technology themselves depend on others,’ Karremans said on the government’s website.

European countries have grown increasingly concerned over their reliance on AI technologies developed by US companies.

The Groningen initiative marks a broader effort by the EU to build its own AI infrastructure instead of leaving strategic control in foreign hands.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!