Nvidia ramps up AI push with new Taiwan plans

Nvidia CEO Jensen Huang has urged Taiwan to embrace agentic AI and robotics to tackle its ongoing labour shortage.

Speaking before his departure from Taipei after a week-long visit, Huang said 2025 would be a ‘very exciting’ year for AI, as the technology can now ‘reason’ and work through problems step by step in ways not seen before.

The new wave of agentic AI, he explained, could assist people with various workplace and everyday tasks.
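The article does not define ‘agentic’ further, but the idea is commonly sketched as a loop in which a model proposes a step, a tool executes it, and the result feeds the next step. A minimal illustration, where `llm_plan` is a stand-in for a real language-model call (an assumption, not any specific product):

```python
def run_agent(goal, tools, llm_plan, max_steps=5):
    """Minimal agent loop: the planner proposes an action, a tool
    runs it, and the observation is fed back for the next decision.
    `llm_plan` and the tool names are hypothetical placeholders."""
    history = []
    for _ in range(max_steps):
        action, arg = llm_plan(goal, history)
        if action == "finish":
            return arg
        observation = tools[action](arg)
        history.append((action, arg, observation))
    return None  # gave up after max_steps
```

A toy planner that calls one tool and then finishes is enough to exercise the loop; real agentic systems differ mainly in how much reasoning sits inside `llm_plan`.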

Huang added that Taiwan, despite being a hub of innovation, faces a lack of manpower. ‘Now with AI and robots, Taiwan can expand its opportunity,’ he said.

He also expressed enthusiasm over the production ramp-up of Blackwell, Nvidia’s latest GPU architecture built for AI workloads, noting that partners across Taiwan are already in full swing.

Huang’s trip included meetings with local partners and a keynote at Computex Taipei, where he unveiled Nvidia’s new Taiwan office and plans for the country’s first large-scale AI supercomputer.

In a TV interview, Huang urged the Taiwanese government to invest more in energy infrastructure to support the growing AI sector. He warned that the energy demands of AI development could exceed 100 megawatts in the near future, stressing that energy availability is the key limitation.
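For a rough sense of scale of the 100-megawatt figure, one can divide it by an assumed average household draw; the 1.2 kW baseline below is an illustrative assumption, not a figure from the article:

```python
# Back-of-envelope scale check for a 100 MW AI power demand.
demand_mw = 100
household_kw = 1.2  # assumed average continuous household draw
households = demand_mw * 1_000 / household_kw  # roughly 83,000 homes
```

On that assumption, 100 MW is on the order of the continuous demand of a mid-sized city's housing stock, which is why Huang frames energy availability as the key limitation.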

Taiwan’s expanding AI ecosystem — from chip plants to educational institutions — would require substantial support to thrive, he said, pledging to return for Chinese New Year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Silicon Valley fights over AI elite

Silicon Valley’s race to dominate AI has shifted focus from data centres and algorithms to a more human battlefield — elite researchers.

Since the arrival of ChatGPT in late 2022, the competition to attract and retain top AI minds has intensified, with companies offering staggering incentives to a tiny pool of experts.

Startups and tech giants alike are treating recruitment like a high-stakes game of chess. Former OpenAI researcher Ariel Herbert-Voss compared hiring strategies to balancing game pieces: ‘Do I have enough rooks? Enough knights?’

Companies like OpenAI, Google DeepMind, and Elon Musk’s xAI are pulling out all the stops — from private jets to personal calls — to secure researchers whose work can directly shape AI breakthroughs.

OpenAI has reportedly offered multi-million dollar bonuses to deter staff from joining rivals such as SSI, the startup led by former chief scientist Ilya Sutskever. Some retention deals include $2 million in bonuses and equity packages worth $20 million or more, with just a one-year commitment.

Google DeepMind has also joined the race with $20 million annual packages and fast-tracked stock vesting schedules for top researchers.

What makes this talent war so intense is the scarcity of these individuals. Experts estimate that only a few dozen to perhaps a thousand researchers are behind the most crucial advances in large language models.

With high-profile departures, such as former OpenAI CTO Mira Murati founding a new rival and recruiting 20 former colleagues, the fight for AI’s brightest minds shows no signs of slowing.

OpenAI buys Jony Ive’s AI hardware firm

OpenAI has acquired hardware startup io Products, founded by former Apple designer Jony Ive, in a $6.5 billion equity deal. Ive will now join the company as creative head, aiming to craft cutting-edge hardware for the era of generative AI.

The move signals OpenAI’s intention to build its own hardware platform instead of relying on existing ecosystems like Apple’s iOS or Google’s Android. By doing so, the firm plans to fuse its AI technology, including ChatGPT, with original physical products designed entirely in-house.

Jony Ive, the designer behind iconic Apple devices such as the iPhone and iMac, had already been collaborating with OpenAI through his firm LoveFrom for the past two years. Their shared ambition is to create hardware that redefines how people interact with AI.

While exact details remain under wraps, OpenAI CEO Sam Altman and Ive have teased that a prototype is in development, described as potentially ‘the coolest piece of technology the world has ever seen’.

Elton John threatens legal fight over AI use

Sir Elton John has lashed out at the UK government over plans that could allow AI companies to use copyrighted content without paying artists, calling ministers ‘absolute losers’ and accusing them of ‘thievery on a high scale’.

He warned that younger musicians, without the means to challenge tech giants, would be most at risk if the proposed changes go ahead.

The row centres on a rejected House of Lords amendment to the Data Bill, which would have required AI firms to disclose what material they use.

Despite a strong majority in favour in the Lords, the Commons blocked the move, meaning the bill will keep bouncing between the two chambers until a compromise is reached.

Sir Elton, joined by playwright James Graham, said the government was failing to defend creators and seemed more interested in appeasing powerful tech firms.

More than 400 artists, including Sir Paul McCartney, have signed a letter urging Prime Minister Sir Keir Starmer to strengthen copyright protections instead of allowing AI to mine their work unchecked.

While the government insists no changes will be made unless they benefit creators, critics say the current approach risks sacrificing the UK’s music industry for Silicon Valley’s gain.

Sir Elton has threatened legal action if the plans go ahead, saying, ‘We’ll fight it all the way.’

Jamie Lee Curtis calls out Zuckerberg over AI scam using her likeness

Jamie Lee Curtis has directly appealed to Mark Zuckerberg after discovering her likeness had been used without consent in an AI-generated advert.

Posting on Facebook, Curtis expressed her frustration with Meta’s lack of proper channels to report such abuse, stating she had exhausted all official avenues before resorting to a public plea.

The fake video reportedly manipulated footage from an emotional interview following the January wildfires in Los Angeles, inserting false statements under the guise of a product endorsement.

Instead of remaining silent, Curtis urged Zuckerberg to take action, saying the unauthorised content damaged her integrity and voice. Within hours of her public callout, Meta confirmed the video had been removed for breaching its policies, a rare example of a swift response.

‘It worked! Yay Internet! Shame has its value!’ she wrote in a follow-up, though she also highlighted the broader risks posed by deepfakes.

The actress joins a growing list of celebrities, including Taylor Swift and Scarlett Johansson, who’ve been targeted by AI misuse.

Swift was forced to publicly clarify her political stance after an AI video falsely endorsed Donald Trump, while Johansson criticised OpenAI for allegedly using a voice nearly identical to hers despite her refusal to participate in the project.

The issue has reignited concerns around consent, misinformation and the exploitation of public figures.

Instead of waiting for further harm, lawmakers in California have already begun pushing back. New legislation signed by Governor Gavin Newsom aims to protect performers from unauthorised digital replicas and deepfakes.

Meanwhile, in Washington, proposals like the No Fakes Act seek to hold tech platforms accountable, possibly fining them thousands per violation. As Curtis and others warn, without stronger protections, the misuse of AI could spiral further, threatening not just celebrities but the public as a whole.

Meta brings back Robert Fergus to lead AI lab

Meta Platforms has brought back Robert Fergus to lead its AI research lab, FAIR, which he helped found in 2014 alongside Yann LeCun. After spending five years as a research director at Google’s DeepMind, Fergus returns to replace Joelle Pineau, who steps down on 30 May.

Fergus, who previously spent six years as a research scientist at Facebook, announced his return on LinkedIn, expressing gratitude to Pineau and reaffirming Meta’s long-term commitment to AI.

FAIR, Meta’s Fundamental AI Research division, focuses on innovations such as voice translation and image recognition to support its open-source Llama language model.

The move comes as Meta ramps up its AI investment, with CEO Mark Zuckerberg allocating up to $65 billion in capital spending for 2025 to expand the company’s AI infrastructure.

AI is now deeply integrated into Meta’s services, including Facebook, Instagram, Messenger, WhatsApp, and a new standalone app meant to rival OpenAI’s ChatGPT.

By bringing Fergus back instead of appointing a new outsider, Meta signals its intent to build on its existing AI legacy while pushing further toward human-level machine experiences.

UK police struggle to contain online misinformation

Sir Andy Cooke, His Majesty’s Chief Inspector of Constabulary, has urged that Ofcom be granted stronger powers to swiftly remove harmful online posts, particularly misinformation linked to public unrest. He criticised delays in tackling false content during the 2024 riots, which allowed damaging narratives to spread unchecked.

The UK Online Safety Act, though recently passed, does not permit Ofcom to delete individual posts. Ofcom acknowledged the connection between online posts and the disorder but stated it is responsible for overseeing platforms’ safety systems, not moderating content directly.

Critics argue this leaves a gap in quickly stopping harmful material from spreading. The regulator has faced scrutiny for its perceived lack of action during last summer’s violence. Over 30 people have already been arrested for riot-related posts, with some receiving prison sentences.

Police forces were found to have limited capability to counter online misinformation, according to a new report. Sir Andy stressed the need for improved policing strategies and called for legal changes to deter inflammatory online behaviour.

Chefs quietly embrace AI in the kitchen

At this year’s Michelin Guide awards in France, AI sparked nearly as much conversation as the stars themselves.

Paris-based chef Matan Zaken, of the one-star restaurant Nhome, said AI dominated discussions among chefs, even though many are hesitant to admit they already rely on tools like ChatGPT for inspiration and recipe development.

Zaken openly embraces AI in his kitchen, using the paid version of ChatGPT to generate ingredient pairings, such as peanuts and wild garlic, that he might not have considered otherwise. Instead of starting with traditional tastings, he now consults vast databases of food imagery and chemical profiles.

In a recent collaboration with the digital collective Obvious Art, AI-generated food photos came first, and Zaken created dishes to match them.

Still, not everyone is sold on AI’s place in haute cuisine. Some top chefs insist that no algorithm can replace the human palate or creativity honed by years of training.

Philippe Etchebest, who just earned a second Michelin star, argued that while AI may be helpful elsewhere, it has no place in the artistry of the kitchen. Others worry it strays too far from the culinary traditions rooted in local produce and craftsmanship.

Many chefs, however, seem more open to using AI behind the scenes. From managing kitchen rotas to predicting ingredient costs or carbon footprints, phone apps like Menu and Fullsoon are gaining popularity.

Experts believe molecular databases and cookbook analysis could revolutionise flavour pairing and food presentation, while robots might one day take over laborious prep work—peeling potatoes included.
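The flavour-pairing idea the experts allude to is often reduced to the shared-compound hypothesis: ingredients that share aroma compounds tend to pair well. A minimal sketch of that scoring, where the compound sets are invented purely for illustration and do not come from any real molecular database:

```python
def pairing_score(compounds_a, compounds_b):
    """Jaccard overlap of two ingredients' flavour-compound sets:
    shared compounds divided by all compounds across both."""
    a, b = set(compounds_a), set(compounds_b)
    if not (a or b):
        return 0.0
    return len(a & b) / len(a | b)

# Invented compound lists, for illustration only.
peanut = {"pyrazine", "vanillin", "hexanal"}
wild_garlic = {"allicin", "hexanal", "diallyl_disulfide"}
```

Here the single shared compound out of five total gives a score of 0.2; a real system would weight compounds by concentration and perceptual importance rather than counting them equally.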

Meta introduces face recognition to help UAE users recover hacked accounts

Meta is introducing facial recognition tools to help UAE users recover hacked accounts on Facebook and Instagram and stop scams that misuse public figures’ images. The technology compares suspicious ads to verified profile photos and removes them automatically if a match is found.

Well-known individuals in the region are automatically enrolled in the programme but can opt out if they choose. A new video selfie feature has also been rolled out to help users regain access to compromised accounts.

This allows identity verification through a short video matched with existing profile photos, offering a faster and more secure alternative to document-based checks.
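Meta has not published how its matching works, but face verification of this kind is generally done by comparing embedding vectors. A minimal sketch of the general technique, with an illustrative similarity threshold that is an assumption, not Meta’s actual setting:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Illustrative threshold; production systems tune this against
# measured false-match and false-non-match rates.
MATCH_THRESHOLD = 0.8

def is_same_person(ad_face_embedding, profile_embedding,
                   threshold=MATCH_THRESHOLD):
    """True if the face in a suspicious ad matches a verified profile."""
    return cosine_similarity(ad_face_embedding, profile_embedding) >= threshold
```

The embeddings themselves would come from a trained face-recognition model; the sketch only shows the comparison step that decides whether an ad is removed or an account unlocked.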

Meta confirmed that all facial data used for verification is encrypted, deleted immediately after use, and never repurposed.

The company says this is part of a broader effort to fight impersonation scams and protect both public figures and regular users, not just in the UAE but elsewhere too.

Meta’s regional director highlighted the emotional and financial harm such scams can cause, reinforcing the need for proactive defences.

Microsoft says AI now writes nearly a third of its code

Microsoft CEO Satya Nadella revealed that AI now writes between 20% and 30% of the company’s internal code.

He shared the figure during a fireside conversation with Meta CEO Mark Zuckerberg at the recent LlamaCon conference. Nadella added that the share of AI-generated output varies by programming language.

Nadella’s comments came in response to a question from Zuckerberg, who admitted he didn’t know the figure for Meta. Google’s CEO Sundar Pichai recently reported similar figures, saying AI now generates over 30% of Google’s code.

Despite these bold claims, there’s still no industry-wide standard for measuring AI-written code. The ambiguity suggests such figures should be interpreted cautiously. Nevertheless, the trend highlights the growing impact of generative AI on software development.
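The measurement ambiguity is easy to see once you try to compute such a figure. The sketch below assumes every line carries an authorship tag, which is itself the unsolved part: whether ‘AI-written’ means accepted autocomplete suggestions, lines surviving review, or whole commits changes the number substantially.

```python
def ai_code_share(lines):
    """Fraction of lines tagged as AI-authored.

    `lines` is a list of (text, author) pairs with author either
    'ai' or 'human' -- an assumed tagging scheme; how lines get
    attributed in the first place is exactly what the industry
    has not standardised.
    """
    if not lines:
        return 0.0
    ai = sum(1 for _, author in lines if author == "ai")
    return ai / len(lines)
```

Two companies using different attribution rules on the same codebase could both honestly report very different percentages, which is why the 20–30% figures are hard to compare.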
