Google’s Pichai says AI will free coders to focus on creativity

Google CEO Sundar Pichai has said AI is not a threat to human jobs—particularly in engineering—but rather a tool to make work more creative and efficient.

In a recent interview with Lex Fridman, Pichai explained that AI is already boosting productivity across Google, generating around 30% of new code and improving overall engineering velocity by around 10%.

Far from cutting staff, Pichai confirmed Google plans to hire more engineers in 2025, arguing that AI expands possibilities rather than reducing demand.

‘The opportunity space of what we can do is expanding too,’ he said. ‘It makes coding more fun and frees you up for creativity, problem-solving, and brainstorming.’

Rather than replacing jobs, Pichai sees AI as a companion—handling repetitive tasks and enabling engineers to focus on innovation. He believes this shift will also democratise software development, empowering more people to build and create with code.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK judges issue warning on unchecked AI use by lawyers

A senior UK judge has warned that lawyers may face prosecution if they continue citing fake legal cases generated by AI without verifying their accuracy.

High Court justice Victoria Sharp called the misuse of AI a threat to justice and public trust, after lawyers in two recent cases relied on false material created by generative tools.

In one £90 million lawsuit involving Qatar National Bank, a lawyer submitted 18 cases that did not exist. The client later admitted to supplying the false information, but Justice Sharp criticised the lawyer for depending on the client’s research instead of conducting proper legal checks.

In another case, five fabricated cases were used in a housing claim against the London Borough of Haringey. The barrister denied using AI but failed to provide a clear explanation.

Both incidents have been referred to professional regulators. Sharp warned that submitting false information could amount to contempt of court or, in severe cases, perverting the course of justice — an offence that can lead to life imprisonment.

While recognising AI as a useful legal tool, Sharp stressed the need for oversight and regulation. She said AI’s risks must be managed with professional discipline if public confidence in the legal system is to be preserved.

UK teams with tech giants on AI training

The UK government is launching a nationwide AI skills initiative aimed at both workers and schoolchildren, with Prime Minister Keir Starmer announcing partnerships with major tech companies including Google, Microsoft and Amazon.

The £187 million TechFirst programme will provide AI education to one million secondary students and train 7.5 million workers over the next five years.

Rather than keeping such tools limited to specialists, the government plans to make AI training accessible across classrooms and businesses. Companies involved will make learning materials freely available to boost digital skills and productivity, particularly in using chatbots and large language models.

Starmer said the scheme is designed to empower the next generation to shape AI’s future instead of being shaped by it. He called it the start of a new era of opportunity and growth, as the UK aims to strengthen its global leadership in AI.

The initiative arrives as the country’s AI sector, currently worth £72 billion, is projected to grow to more than £800 billion by 2035.

The government also signed two agreements with Nvidia to support a nationwide AI talent pipeline, reinforcing efforts to expand both the workforce and innovation in the sector.

Nvidia and FCA open AI sandbox for UK fintechs

Financial firms across the UK will soon be able to experiment with AI in a new regulatory sandbox, launched by the Financial Conduct Authority (FCA) in partnership with Nvidia.

Known as the Supercharged Sandbox, it offers a secure testing ground for firms that want to explore AI tools but lack advanced computing resources of their own.

Set to begin in October, the initiative is open to any financial services company testing AI-driven ideas. Firms will have access to Nvidia’s accelerated computing platform and tailored AI software, helping them work with complex data, improve automation, and enhance risk management in a controlled setting.

The FCA said the sandbox is designed to support firms lacking the in-house capacity to test new technology.

It aims to provide not only computing power but also regulatory guidance and access to better datasets, creating an environment where innovation can flourish while remaining compliant with rules.

The move forms part of a wider push by the UK government to foster economic growth through innovation. Finance minister Rachel Reeves has urged regulators to clear away obstacles to growth and praised the FCA and Bank of England for acting on her call to cut red tape.

Odyssey presents immersive AI-powered streaming

Odyssey, a startup founded by self-driving veterans Oliver Cameron and Jeff Hawke, has unveiled an AI model that allows users to interact with streaming video in real time.

The technology generates video frames every 40 milliseconds, enabling users to move through scenes like a 3D video game instead of passively watching. A demo is currently available online, though it is still in its early stages.

The system relies on a new kind of ‘world model’ that predicts future visual states based on previous actions and environments. Odyssey claims its model can maintain spatial consistency, learn motion from video, and sustain coherent video output for five minutes or more.
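Odyssey has not published its architecture, but the loop the article describes, in which each new frame is predicted from the previous state and the latest user action at roughly 40-millisecond intervals, can be sketched in a few lines. Everything below, including the 1-D "scene", the action names, and `predict_next_frame`, is a hypothetical toy illustration rather than Odyssey's actual code:

```python
FRAME_INTERVAL_MS = 40  # Odyssey reports generating a new frame every 40 ms

def predict_next_frame(prev_frame, action):
    # Hypothetical stand-in for the learned world model: here we just
    # rotate a 1-D "scene" according to the user's action.
    shift = {"left": -1, "right": 1, "stay": 0}[action]
    return prev_frame[shift:] + prev_frame[:shift] if shift else prev_frame

def stream(initial_frame, actions):
    # Autoregressive loop: each frame is conditioned on the previous
    # frame plus the latest action, the basic pattern a world model
    # uses to keep an interactive stream spatially consistent.
    frames = [initial_frame]
    for action in actions:
        frames.append(predict_next_frame(frames[-1], action))
    return frames

frames = stream([0, 1, 2, 3], ["right", "right", "stay"])
```

The hard part, which this sketch omits entirely, is keeping such a loop coherent over thousands of steps: small per-frame errors compound, which is why sustaining consistent output for five minutes is a notable claim.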

Unlike models trained solely on internet data, Odyssey captures real-world environments using a custom 360-degree, backpack-mounted camera to build higher-fidelity simulations.

Tech giants and AI startups are exploring world models to power next-generation simulations and interactive media. Yet creative professionals remain wary. A 2024 study commissioned by the Animation Guild predicted significant job disruptions across film and animation.

Game studios like Activision Blizzard have been scrutinised for using AI while cutting staff.

Odyssey, however, insists its goal is collaboration instead of replacement. The company is also developing software to let creators edit scenes using tools like Unreal Engine and Blender.

Backed by $27 million in funding and supported by Pixar co-founder Ed Catmull, Odyssey aims to transform video content across entertainment, education, and advertising through on-demand interactivity.

Elon Musk’s X tightens control on AI data use

Social media platform X has updated its developer agreement to prohibit the use of its content for training large language models.

The new clause, added under the restrictions section, forbids any attempt to use X’s API or content to fine-tune or train foundational or frontier AI models.

The move follows Elon Musk’s acquisition of X through his AI company xAI, which is developing its own models.

By restricting external access, the company aims to prevent competitors from freely using X’s data while maintaining control over a valuable resource for training AI systems.

X joins a growing list of platforms, including Reddit and The Browser Company, that have introduced terms blocking unauthorised AI training.

The shift reflects a broader industry trend towards limiting open data access amid the rising value of proprietary content in the AI arms race.

Anthropic debuts AI tools for US national security

Anthropic has launched Claude Gov, a new line of AI models explicitly tailored for US national security operations. Built with direct input from government clients, the models are already in use by top-tier agencies.

The classified-use models were developed with enhanced safety testing and are optimised for sensitive work, with improved handling of classified material, proficiency in rare languages, and better comprehension of defence-specific documents.

The Claude Gov models reflect Anthropic’s broader move into government partnerships, building on its collaboration with Palantir and AWS.

As competition in defence-focused AI intensifies, rivals including OpenAI, Meta, and Google are also adapting their models for secure environments.

The sector’s growing interest in custom, security-conscious AI tools marks a shift in how leading labs seek stable revenue streams and deeper ties with government agencies.

Bjorn Ulvaeus says AI is ‘an extension of your mind’

ABBA legend Bjorn Ulvaeus is working on a new musical with the help of AI, describing the technology as ‘an extension of your mind.’ Despite previously criticising AI companies’ unlicensed use of artists’ work, the 80-year-old Swedish songwriter believes AI can be a valuable creative partner.

At London’s inaugural SXSW, Ulvaeus explained how he uses AI tools to explore lyrical ideas and overcome writer’s block. ‘It is like having another songwriter in the room with a huge reference frame,’ he said.

‘You can prompt a lyric and ask where to go from there. It usually comes out with garbage, but sometimes something in it gives you another idea.’

Ulvaeus was among over 10,000 creatives who signed an open letter warning of the risks AI poses to artists’ rights. Still, he maintains that when used with consent and care, AI can support — not replace — human creativity. ‘It must not exclude the human,’ he warned.

AI in higher education: A mixed blessing for students and institutions

AI is rapidly reshaping university life, offering students new tools to boost creativity, structure assignments, and develop ideas more efficiently. At institutions like Oxford Brookes University, students such as 22-year-old Sunjaya Phillips have found that AI enhances confidence and productivity when used responsibly and with faculty guidance.

She describes AI as a ‘study buddy’ that transformed her academic experience, especially during creative blocks, where AI-generated prompts saved valuable time. However, the rise of AI in academia also raises important concerns.

A global student survey revealed that while many embrace AI in their studies, a majority fear its long-term implications for employment. Some admit to misusing the technology for dishonest purposes, highlighting the ethical challenges it presents.

Experts like Dr Charlie Simpson from Oxford Brookes caution that relying too heavily on AI to ‘do the thinking’ undermines educational goals and may devalue the learning process.

Despite these concerns, many educators and institutions remain optimistic about AI’s potential—if used wisely. Professor Keiichi Nakata from Henley Business School stresses that AI is not a replacement but a powerful aid, likening its expected workplace relevance to today’s basic IT skills.

He and others argue that responsible AI use could elevate the capabilities of future graduates and reshape degree expectations accordingly. While some students worry about job displacement, others, like Phillips, view AI as a support system rather than a threat.

The consensus among academics is clear: success in the age of AI will depend not on avoiding the technology, but on mastering it with discernment, ethics, and adaptability.

Broadcom beats estimates but stock dips after-hours

Broadcom reported strong second-quarter earnings and revenue driven by robust AI demand and solid networking performance.

Despite beating expectations and raising its outlook, the stock fell 3.47% in after-hours trading on Thursday, likely due to profit-taking rather than concern about fundamentals. Shares had rallied more than 75% since April.

Revenue for the quarter ending May 5 reached US$15 billion, up 20% year-on-year. Adjusted earnings per share were US$1.58, exceeding estimates by two cents.

Net income more than doubled to US$4.97 billion. CEO Hock Tan attributed the strength to growing demand for AI infrastructure and contributions from VMware, which Broadcom acquired in late 2023.

Broadcom forecast Q3 revenue of approximately US$15.8 billion, slightly above analyst expectations. AI-related revenue is expected to rise to US$5.1 billion, up from US$4.4 billion in Q2, fuelled by custom AI accelerators and high-speed networking chips used in hyperscale data centres.

Tan said that the trend should continue through fiscal 2026.

Semiconductor solutions brought in US$8.4 billion in Q2, up 17% from last year, while software revenue rose 25% to US$6.6 billion, with VMware as a key contributor.

About 30% of Broadcom’s AI-related revenue now comes from its switching business, reflecting increasing demand for AI chip clusters. Despite the slight dip in share price, analysts continue to view Broadcom as a key player in AI infrastructure.
