Qualcomm to acquire Alphawave for $2.4 billion

Qualcomm has agreed to acquire London-listed semiconductor firm Alphawave for approximately $2.4 billion in cash, aiming to strengthen its position in AI and data centre technologies. Alphawave shares surged 23% in London trading following the announcement.

The deal, offering 183 pence per share, represents a 96% premium over Alphawave’s share price at the end of March. Regulatory and shareholder approvals are still required, with the transaction expected to close in early 2026.
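
As a quick sanity check on those terms, the implied pre-offer share price can be backed out from the stated premium. The sketch below assumes the 96% figure is measured against the undisturbed end-of-March closing price; it is an illustration, not a figure from the announcement.

```python
# Back out the implied end-of-March share price from the offer terms.
# Assumption: the 96% premium is measured against that undisturbed closing price.
offer_pence = 183
premium = 0.96

implied_march_price = offer_pence / (1 + premium)
print(f"Implied end-of-March price: {implied_march_price:.1f} pence")  # roughly 93.4 pence
```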

Qualcomm is diversifying beyond smartphones as CEO Cristiano Amon targets growth sectors such as AI hardware. Alphawave, known for high-speed chip connectivity, has gained momentum, especially among US AI customers.

Alphawave’s board unanimously supports the offer, and shareholders representing half the company have already agreed to the deal. In addition to the cash option, Qualcomm is offering stock and security exchange alternatives.

Samsung pilots AI coding tool Cline for internal developers

Samsung Electronics is testing a new open-source AI coding assistant called Cline, which is expected to be adopted by its Device eXperience (DX) division as early as next month, according to Yonhap News Agency.

Cline leverages Claude 3.7 Sonnet’s advanced agentic coding capabilities to autonomously handle complex software development tasks. The goal is to significantly boost developer productivity across Samsung’s mobile and home appliance units, which are both part of the DX division.
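
To illustrate what ‘agentic’ coding means in practice, here is a minimal, hypothetical sketch of the plan-act-observe loop such assistants run. It uses a stubbed model function in place of a real LLM call and does not reflect Cline’s or Anthropic’s actual APIs.

```python
from pathlib import Path

def stub_model(task: str, observation: str | None) -> dict:
    """Stand-in for an LLM call (e.g. to Claude 3.7 Sonnet); proposes one action per turn."""
    if observation is None:
        return {"action": "write_file", "path": "hello.py", "content": "print('hello')\n"}
    return {"action": "done", "summary": f"Task '{task}' finished: {observation}"}

def agent_loop(task: str, max_steps: int = 5) -> str:
    """Plan-act-observe loop: the model proposes an action, the client executes it,
    and the result is fed back until the model declares the task complete."""
    observation = None
    for _ in range(max_steps):
        step = stub_model(task, observation)
        if step["action"] == "done":
            return step["summary"]
        if step["action"] == "write_file":
            Path(step["path"]).write_text(step["content"])
            observation = f"wrote {step['path']}"
    return "stopped without finishing"

print(agent_loop("create a hello-world script"))
```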

The move aligns with Samsung’s broader ‘AI for All’ strategy. Last month, the company created a dedicated AI productivity innovation group within the DX division.

This follows the establishment of an AI centre within its chip business in December 2024, further underscoring the tech giant’s commitment to embedding AI across its operations.

Rednote launches public AI model to rival Alibaba and DeepSeek

Chinese social media giant Rednote, also known as Xiaohongshu, has released its first open-source large language model, dots.llm1, marking a major step in its AI ambitions. The model is now publicly available via Hugging Face, a popular developer platform.
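
For developers who want to experiment, loading an open-weight model from Hugging Face typically takes only a few lines with the transformers library. The repository id below is an assumption based on the announcement, so check the model card on Hugging Face for the exact name, size, and licence before running it.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repository id for Rednote's dots.llm1; verify on huggingface.co before use.
model_id = "rednote-hilab/dots.llm1.inst"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, device_map="auto")

prompt = "Write a short greeting in Chinese and English."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```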

By joining the growing number of firms from China open-sourcing AI models—such as Alibaba and DeepSeek—Rednote aims to foster a developer community, expand global influence, and showcase its technical progress amid US-led restrictions on advanced technology exports.

Open-sourcing also encourages collaboration and experimentation, in contrast to the proprietary models kept under wraps by some US companies.

Although dots.llm1 performs slightly behind cutting-edge models like DeepSeek-V3, its coding capabilities rival Alibaba’s Qwen 2.5 series. The launch follows Rednote’s recent AI-powered search app, Diandian, which helps users explore Xiaohongshu’s ecosystem more intuitively.

The company began investing in large language models shortly after ChatGPT’s debut and has accelerated its AI strategy in recent months.

Google’s Pichai says AI will free coders to focus on creativity

Google CEO Sundar Pichai has said AI is not a threat to human jobs—particularly in engineering—but rather a tool to make work more creative and efficient.

In a recent interview with Lex Fridman, Pichai explained that AI is already powering productivity across Google, now contributing to roughly 30% of the code the company generates and lifting overall engineering velocity by around 10%.

Far from cutting staff, Google plans to hire more engineers in 2025, Pichai confirmed, arguing that AI expands possibilities rather than reducing demand.

‘The opportunity space of what we can do is expanding too,’ he said. ‘It makes coding more fun and frees you up for creativity, problem-solving, and brainstorming.’

Rather than replacing jobs, Pichai sees AI as a companion—handling repetitive tasks and enabling engineers to focus on innovation. He believes this shift will also democratise software development, empowering more people to build and create with code.

OpenAI cracks down on misuse of ChatGPT by foreign threat actors

OpenAI has shut down a network of ChatGPT accounts allegedly linked to nation-state actors from Russia, China, Iran, North Korea, and others after uncovering their use in cyber and influence operations.

The banned accounts were used to assist in developing malware, automate social media content, and conduct reconnaissance on sensitive technologies.

According to OpenAI’s latest threat report, a Russian-speaking group used the chatbot to iteratively improve malware code written in Go. Each account was used only once to refine the code before being abandoned, a tactic highlighting the group’s emphasis on operational security.

The malicious software was later disguised as a legitimate gaming tool and distributed online, infecting victims’ devices to exfiltrate sensitive data and establish long-term access.

Chinese-linked groups, including APT5 and APT15, were found using OpenAI’s models for a range of technical tasks—from researching satellite communications to developing scripts for Android app automation and penetration testing.

Other accounts were linked to influence campaigns that generated propaganda or polarising content in multiple languages, including efforts to pose as journalists and simulate public discourse around elections and geopolitical events.

The banned activities also included scams, social engineering, and politically motivated disinformation. OpenAI stressed that although some misuse was detected, none involved sophisticated or large-scale attacks enabled solely by its tools.

The company said it is continuing to improve detection and mitigation efforts to prevent abuse of its models.

UK judges issue warning on unchecked AI use by lawyers

A senior UK judge has warned that lawyers may face prosecution if they continue citing fake legal cases generated by AI without verifying their accuracy.

High Court justice Victoria Sharp called the misuse of AI a threat to justice and public trust, after lawyers in two recent cases relied on false material created by generative tools.

In one £90 million lawsuit involving Qatar National Bank, a lawyer submitted 18 cases that did not exist. The client later admitted to supplying the false information, but Justice Sharp criticised the lawyer for depending on the client’s research instead of conducting proper legal checks.

In another case, five fabricated cases were used in a housing claim against the London Borough of Haringey. The barrister denied using AI but failed to provide a clear explanation.

Both incidents have been referred to professional regulators. Sharp warned that submitting false information could amount to contempt of court or, in severe cases, perverting the course of justice — an offence that can lead to life imprisonment.

While recognising AI as a useful legal tool, Sharp stressed the need for oversight and regulation. She said AI’s risks must be managed with professional discipline if public confidence in the legal system is to be preserved.

UK teams with tech giants on AI training

The UK government is launching a nationwide AI skills initiative aimed at both workers and schoolchildren, with Prime Minister Keir Starmer announcing partnerships with major tech companies including Google, Microsoft and Amazon.

The £187 million TechFirst programme will provide AI education to one million secondary students and train 7.5 million workers over the next five years.

Rather than keeping such tools limited to specialists, the government plans to make AI training accessible across classrooms and businesses. Companies involved will make learning materials freely available to boost digital skills and productivity, particularly in using chatbots and large language models.

Starmer said the scheme is designed to empower the next generation to shape AI’s future instead of being shaped by it. He called it the start of a new era of opportunity and growth, as the UK aims to strengthen its global leadership in AI.

The initiative arrives as the country’s AI sector, currently worth £72 billion, is projected to grow to more than £800 billion by 2035.
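
For context, those two figures imply a very steep compound growth rate. The rough calculation below assumes the projection covers roughly the decade from 2025 to 2035; the timeframe is an assumption, not part of the announcement.

```python
# Implied compound annual growth rate (CAGR) from £72bn to £800bn.
# Assumption: the growth is spread over roughly ten years (2025-2035).
current, projected, years = 72e9, 800e9, 10

cagr = (projected / current) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.1%}")  # roughly 27% per year
```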

The government also signed two agreements with Nvidia to support a nationwide AI talent pipeline, reinforcing efforts to expand both the workforce and innovation in the sector.

Quantum light beats AI at its own game in surprise photonic experiment

A small-scale quantum device developed by researchers at the University of Vienna has outperformed advanced classical machine learning algorithms—including some used in today’s leading AI systems—using just two photons and a glass chip.

The experiment suggests that useful quantum advantage could arrive far sooner than previously thought, not in massive future machines but in today’s modest photonic setups.

The team’s six-mode processor doesn’t rely on raw speed to beat traditional systems. Instead, it harnesses a uniquely quantum property: the way identical particles interfere. This interference naturally computes mathematical structures known as permanents, which are computationally expensive for classical systems.
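
To make that claim concrete: the permanent of an n-by-n matrix is defined like the determinant, except every term is added with a plus sign, and no polynomial-time exact algorithm is known. The naive sketch below is purely illustrative of why classical evaluation gets expensive; it is not the Vienna team’s photonic method.

```python
from itertools import permutations
from math import prod

def permanent(m: list[list[float]]) -> float:
    """Naive permanent: sum over all n! column permutations.
    Unlike the determinant there is no sign, and the cost grows factorially with n."""
    n = len(m)
    return sum(prod(m[i][sigma[i]] for i in range(n)) for sigma in permutations(range(n)))

# A 3x3 example; at n = 20 this brute force already sums about 2.4e18 terms.
print(permanent([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))  # 450
```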

By embedding these quantum calculations into a pattern-recognition task, the researchers consistently achieved higher classification accuracy across multiple datasets.

Crucially, the device operates with extreme energy efficiency, offering a promising route to sustainable AI. Co-author Iris Agresti highlighted the growing energy costs of modern machine learning and pointed to photonic quantum systems as a potential solution.

These early results could pave the way for new applications in areas where training data is limited and classical methods fall short—redefining the future of AI and quantum computing alike.

Meta plans $10 billion investment in Scale AI

Meta Platforms is reportedly in talks to invest over $10 billion in Scale AI, a data labelling startup already backed by Nvidia, Amazon, and Meta itself.

The deal, if finalised, would mark Meta’s largest external investment in AI to date, representing a notable shift away from its prior reliance on in-house research and open-source projects.

Founded in 2016, Scale AI supports the training of AI models through high-quality labelled datasets. It also provides a platform for AI research collaboration, now with contributors in more than 9,000 locations.
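
As a rough illustration of what ‘high-quality labelled data’ means here, a single annotation record often looks something like the sketch below. The field names are invented for illustration and are not Scale AI’s actual schema.

```python
# Hypothetical example of one labelled record used to train or evaluate a model.
# Field names are illustrative only, not Scale AI's actual data format.
labelled_example = {
    "prompt": "Summarise the key points of the attached meeting notes.",
    "model_response": "The team agreed to ship the feature in Q3...",
    "label": {
        "helpfulness": 4,      # 1-5 rating from a human annotator
        "factual_errors": 0,   # count of unsupported claims
        "preferred": True,     # chosen over an alternative response
    },
    "annotator_id": "anno-0042",
}
print(labelled_example["label"]["helpfulness"])
```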

The company was last valued at nearly $14 billion following a 2024 funding round involving Meta and Microsoft.

Meta’s planned investment signals an aggressive expansion of its AI ambitions. Earlier this year, CEO Mark Zuckerberg announced up to $65 billion in AI spending for 2025, a push that includes Meta’s Llama-powered assistant, now embedded in Facebook, Instagram and WhatsApp and reaching one billion users monthly.

The move puts Meta in closer competition with Microsoft, which has committed over $13 billion to OpenAI, and Amazon and Alphabet, which are backing rival AI firm Anthropic. Scale AI declined to comment, while Meta has yet to respond publicly.

Nvidia and FCA open AI sandbox for UK fintechs

Financial firms across the UK will soon be able to experiment with AI in a new regulatory sandbox, launched by the Financial Conduct Authority (FCA) in partnership with Nvidia.

Known as the Supercharged Sandbox, it offers a secure testing ground for firms wanting to explore AI tools without needing their own advanced computing infrastructure.

Set to begin in October, the initiative is open to any financial services company testing AI-driven ideas. Firms will have access to Nvidia’s accelerated computing platform and tailored AI software, helping them work with complex data, improve automation, and enhance risk management in a controlled setting.

The FCA said the sandbox is designed to support firms lacking the in-house capacity to test new technology.

It aims to provide not only computing power but also regulatory guidance and access to better datasets, creating an environment where innovation can flourish while remaining compliant with rules.

The move forms part of a wider push by the UK government to foster economic growth through innovation. Finance minister Rachel Reeves has urged regulators to clear away obstacles to growth and praised the FCA and Bank of England for acting on her call to cut red tape.
