Researchers at the Hong Kong University of Science and Technology have unveiled a pioneering AI model called MOME for non-invasive breast cancer diagnosis.
Using China’s largest multiparametric MRI breast cancer dataset, MOME performs at a level comparable to seasoned radiologists and is currently undergoing clinical trials in more than ten hospitals.
Among the institutions participating in the validation phase are Shenzhen People’s Hospital, Guangzhou First Municipal People’s Hospital, and Yunnan Cancer Center. Early results show that MOME excels at predicting response to neoadjuvant (pre-surgical) chemotherapy.
The development highlights the region’s growing capabilities in medtech innovation and could reshape diagnostic strategies for breast cancer across Asia. MOME’s clinical success may also pave the way for similar AI-led models in oncology.
Samsung Electronics is testing a new open-source AI coding assistant called Cline, which is expected to be adopted by its Device eXperience (DX) division as early as next month, according to Yonhap News Agency.
Cline leverages Claude 3.7 Sonnet’s advanced agentic coding capabilities to autonomously handle complex software development tasks. The goal is to significantly boost developer productivity across Samsung’s mobile and home appliance units, which are both part of the DX division.
The move aligns with Samsung’s broader AI for All strategy. Last month, the company created a dedicated AI productivity innovation group within the DX division.
This follows the establishment of an AI centre within its chip business in December 2024, further underscoring the tech giant’s commitment to embedding AI across its operations.
Google CEO Sundar Pichai has said AI is not a threat to human jobs—particularly in engineering—but rather a tool to make work more creative and efficient.
In a recent interview with Lex Fridman, Pichai explained that AI is already boosting productivity across Google, generating around 30% of the company’s code and improving overall engineering velocity by around 10%.
Far from cutting staff, Google plans to hire more engineers in 2025, Pichai confirmed, arguing that AI expands possibilities rather than reducing demand.
‘The opportunity space of what we can do is expanding too,’ he said. ‘It makes coding more fun and frees you up for creativity, problem-solving, and brainstorming.’
Rather than replacing jobs, Pichai sees AI as a companion—handling repetitive tasks and enabling engineers to focus on innovation. He believes this shift will also democratise software development, empowering more people to build and create with code.
A senior UK judge has warned that lawyers may face prosecution if they continue citing fake legal cases generated by AI without verifying their accuracy.
High Court justice Victoria Sharp called the misuse of AI a threat to justice and public trust, after lawyers in two recent cases relied on false material created by generative tools.
In one £90 million lawsuit involving Qatar National Bank, a lawyer submitted 18 cases that did not exist. The client later admitted to supplying the false information, but Justice Sharp criticised the lawyer for depending on the client’s research instead of conducting proper legal checks.
In another case, five fabricated cases were used in a housing claim against the London Borough of Haringey. The barrister denied using AI but failed to provide a clear explanation.
Both incidents have been referred to professional regulators. Sharp warned that submitting false information could amount to contempt of court or, in severe cases, perverting the course of justice — an offence that can lead to life imprisonment.
While recognising AI as a useful legal tool, Sharp stressed the need for oversight and regulation. She said AI’s risks must be managed with professional discipline if public confidence in the legal system is to be preserved.
The UK government is launching a nationwide AI skills initiative aimed at both workers and schoolchildren, with Prime Minister Keir Starmer announcing partnerships with major tech companies including Google, Microsoft and Amazon.
The £187 million TechFirst programme will provide AI education to one million secondary students and train 7.5 million workers over the next five years.
Rather than keeping such tools limited to specialists, the government plans to make AI training accessible across classrooms and businesses. Companies involved will make learning materials freely available to boost digital skills and productivity, particularly in using chatbots and large language models.
Starmer said the scheme is designed to empower the next generation to shape AI’s future instead of being shaped by it. He called it the start of a new era of opportunity and growth, as the UK aims to strengthen its global leadership in AI.
The initiative arrives as the country’s AI sector, currently worth £72 billion, is projected to grow to more than £800 billion by 2035.
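For context, those figures imply a compound annual growth rate of roughly 27%, assuming the £72 billion baseline is current and the horizon runs about ten years to 2035 (the article does not state the baseline year, so the ten-year span is an assumption). A quick check:

```python
# Implied compound annual growth rate for the UK AI sector,
# assuming £72bn today and £800bn in roughly ten years (by 2035).
years = 10  # assumed horizon; the article gives no baseline year
cagr = (800 / 72) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # -> Implied CAGR: 27.2%
```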
The government also signed two agreements with Nvidia to support a nationwide AI talent pipeline, reinforcing efforts to expand both the workforce and innovation in the sector.
Financial firms across the UK will soon be able to experiment with AI in a new regulatory sandbox, launched by the Financial Conduct Authority (FCA) in partnership with Nvidia.
Known as the Supercharged Sandbox, it offers a secure testing ground for firms wanting to explore AI tools without needing advanced computing resources of their own.
Set to begin in October, the initiative is open to any financial services company testing AI-driven ideas. Firms will have access to Nvidia’s accelerated computing platform and tailored AI software, helping them work with complex data, improve automation, and enhance risk management in a controlled setting.
The FCA said the sandbox is designed to support firms lacking the in-house capacity to test new technology.
It aims to provide not only computing power but also regulatory guidance and access to better datasets, creating an environment where innovation can flourish while remaining compliant with rules.
The move forms part of a wider push by the UK government to foster economic growth through innovation. Finance minister Rachel Reeves has urged regulators to clear away obstacles to growth and praised the FCA and Bank of England for acting on her call to cut red tape.
Odyssey, a startup founded by self-driving veterans Oliver Cameron and Jeff Hawke, has unveiled an AI model that allows users to interact with streaming video in real time.
The technology generates a new video frame every 40 milliseconds, letting users move through scenes as if in a 3D video game rather than passively watching. A demo is currently available online, though it is still in its early stages.
The system relies on a new kind of ‘world model’ that predicts future visual states based on previous actions and environments. Odyssey claims its model can maintain spatial consistency, learn motion from video, and sustain coherent video output for five minutes or more.
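For a concrete picture of that interaction pattern, here is a minimal sketch of an action-conditioned, autoregressive frame loop running on a 40-millisecond budget (25 frames per second). Odyssey has not published its architecture or API, so every name below is hypothetical; this illustrates only the loop the article describes, not the real system.

```python
import time

class WorldModel:
    """Hypothetical stand-in for an Odyssey-style world model.
    A real model would run a learned network here; this placeholder only
    shows the conditioning: prior frames plus the viewer's latest action."""

    def predict_next_frame(self, history: list, action: str) -> str:
        return f"frame_{len(history)}_after_{action}"

def interactive_stream(model: WorldModel, get_action, budget_s: float = 0.040):
    """Yield one frame per 40 ms budget (25 fps), conditioned on user input."""
    history = []
    while True:
        start = time.monotonic()
        action = get_action()             # e.g. 'forward', 'turn_left'
        frame = model.predict_next_frame(history, action)
        history.append(frame)             # autoregressive: output feeds back in
        yield frame
        # Sleep off whatever remains of the 40 ms frame budget.
        time.sleep(max(0.0, budget_s - (time.monotonic() - start)))
```

The history conditioning is where the spatial-consistency claim lives: because each frame feeds back into the next prediction, small errors compound, which is why sustaining coherent output for five minutes or more is notable.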
Unlike models trained solely on internet data, Odyssey captures real-world environments using a custom 360-degree, backpack-mounted camera to build higher-fidelity simulations.
Tech giants and AI startups are exploring world models to power next-generation simulations and interactive media. Yet creative professionals remain wary. A 2024 study commissioned by the Animation Guild predicted significant job disruptions across film and animation.
Game studios like Activision Blizzard have been scrutinised for using AI while cutting staff.
Odyssey, however, insists its goal is collaboration instead of replacement. The company is also developing software to let creators edit scenes using tools like Unreal Engine and Blender.
Backed by $27 million in funding and supported by Pixar co-founder Ed Catmull, Odyssey aims to transform video content across entertainment, education, and advertising through on-demand interactivity.
Social media platform X has updated its developer agreement to prohibit the use of its content for training large language models.
The new clause, added under the restrictions section, forbids any attempt to use X’s API or content to fine-tune or train foundation or frontier AI models.
The move follows the acquisition of X by Elon Musk’s AI company xAI, which is developing its own models.
By restricting external access, the company aims to prevent competitors from freely using X’s data while maintaining control over a valuable resource for training AI systems.
X joins a growing list of platforms, including Reddit and The Browser Company, that have introduced terms blocking unauthorised AI training.
The shift reflects a broader industry trend towards limiting open data access amid the rising value of proprietary content in the AI arms race.
Anthropic has launched Claude Gov, a new line of AI models explicitly tailored for US national security operations. Built with direct input from government clients, the models are already in use by top-tier agencies.
These classified-use models were developed with enhanced safety testing and are optimised for sensitive government work, including improved handling of classified data, stronger proficiency in rare languages, and better comprehension of defence-specific documents.
The Claude Gov models reflect Anthropic’s broader move into government partnerships, building on its collaboration with Palantir and AWS.
As competition in defence-focused AI intensifies, rivals including OpenAI, Meta, and Google are also adapting their models for secure environments.
The sector’s growing interest in custom, security-conscious AI tools marks a shift in how leading labs seek stable revenue streams and deeper ties with government agencies.
ABBA legend Björn Ulvaeus is working on a new musical with the help of AI, describing the technology as ‘an extension of your mind.’ Despite previously criticising AI companies’ unlicensed use of artists’ work, the 80-year-old Swedish songwriter believes AI can be a valuable creative partner.
At London’s inaugural SXSW, Ulvaeus explained how he uses AI tools to explore lyrical ideas and overcome writer’s block. ‘It is like having another songwriter in the room with a huge reference frame,’ he said.
‘You can prompt a lyric and ask where to go from there. It usually comes out with garbage, but sometimes something in it gives you another idea.’
Ulvaeus was among over 10,000 creatives who signed an open letter warning of the risks AI poses to artists’ rights. Still, he maintains that when used with consent and care, AI can support — not replace — human creativity. ‘It must not exclude the human,’ he warned.