AI music tools arrive for YouTube creators

YouTube is trialling two new features to improve user engagement and content creation. One enhances comment readability, while the other helps creators produce music using AI for Shorts.

A new threaded layout is being tested to organise comment replies under the original post, allowing clearer, more focused conversations. Currently, the feature is limited to a small group of Premium users on mobile.

YouTube is also expanding Dream Track, an AI-powered tool that creates 30-second music clips from simple text prompts. Creators can generate sounds matching moods like ‘chill piano melody’ or ‘energetic pop beat’, with the option to include AI-generated vocals styled after popular artists.

Both features are available only in the US during the testing phase, with no set date for international release. YouTube’s gradual updates reflect a shift toward more intuitive user experiences and creative flexibility on the platform.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

DeepMind engineers join Microsoft’s AI team

Microsoft has aggressively expanded its AI workforce, hiring more than 20 specialists from Google’s DeepMind research lab in recent months. Notable recruits, now part of Microsoft AI under EVP Mustafa Suleyman, include former DeepMind engineering head Amar Subramanya, as well as product managers and research scientists such as Sonal Gupta, Adam Sadovsky, Tim Frank, Dominic King, and Christopher Kelly.

This talent influx aligns with Suleyman’s leadership of Microsoft’s consumer AI division, which is responsible for Copilot, Bing, and Edge, and underscores the company’s push to solidify its lead in personal AI experiences. Meanwhile, this hiring effort unfolds against a backdrop of 9,000 layoffs globally, highlighting Microsoft’s strategy to redeploy resources toward AI innovation.

However, regulators are scrutinising the move. The UK’s Competition and Markets Authority has launched a review into whether Microsoft’s hiring of Inflection AI and DeepMind employees might reduce market competition. Microsoft maintains that its practice fosters, rather than limits, industry advancement.

ASEAN urged to unite on digital infrastructure

Asia stands at a pivotal moment as policymakers urge swift deployment of converging 5G and AI technologies. Experts argue that 5G should be treated as a foundational enabler for AI, not just a telecom upgrade, to power future industries.

A report from the Lee Kuan Yew School of Public Policy identifies ten urgent imperatives, notably forming national 5G‑AI strategies, empowering central coordination bodies and modernising spectrum policies. Industry leaders stress that aligning 5G and AI investment is essential to sustain innovation.

Without firm action, the digital divide could deepen and stall progress. Coordinated adoption and skilled workforce development are seen as critical to turning incremental gains into transformational regional leadership.

Filtered data not enough, LLMs can still learn unsafe behaviours

Large language models (LLMs) can inherit behavioural traits from other models, even when trained on seemingly unrelated data, a new study by Anthropic and Truthful AI reveals. The findings emerged from the Anthropic Fellows Programme.

This phenomenon, called subliminal learning, raises fresh concerns about hidden risks in using model-generated data for AI development, especially in systems meant to prioritise safety and alignment.

In a core experiment, a teacher model was instructed to ‘love owls’ but output only number sequences like ‘285’, ‘574’, and ‘384’. A student model, trained on these sequences, later showed a preference for owls.

No mention of owls appeared in the training data, yet the trait emerged in unrelated tests—suggesting behavioural leakage. Other traits observed included promoting crime or deception.

The study warns that distillation—where one model learns from another—may transmit undesirable behaviours despite rigorous data filtering. Subtle statistical cues, not explicit content, seem to carry the traits.

The transfer occurs only when both models share the same base: a GPT-4.1 teacher can influence a GPT-4.1 student, but not a student built on a different base such as Qwen.

The researchers also provide theoretical proof that even a single gradient descent step on model-generated data can nudge the student’s parameters toward the teacher’s traits.

Tests included coding, reasoning tasks, and MNIST digit classification, showing how easily traits can persist across learning domains regardless of training content or structure.

The paper states that filtering may be insufficient in principle, since the signals are encoded in statistical patterns rather than explicit words, which limits the effectiveness of standard safety interventions.
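The filtering argument can be made concrete with a small sketch. The example below is hypothetical (the keyword filter and the sample data are illustrative, not the study’s actual pipeline): because the teacher emits only number sequences, a content filter scanning for the trait finds nothing to remove, yet the study reports that fine-tuning a same-base student on such sequences can still transfer the trait.

```python
# Illustrative sketch only: why keyword filtering cannot catch subliminal
# signals. The teacher's outputs are plain number sequences, so a content
# filter for the trait ("owl") has nothing to strip, yet per the study a
# same-base student fine-tuned on these sequences can still inherit the
# teacher's preference.

TRAIT_KEYWORDS = {"owl", "owls"}

def passes_filter(sample: str) -> bool:
    """Return True if the sample contains no explicit trait keyword."""
    tokens = sample.lower().replace(",", " ").split()
    return not any(tok in TRAIT_KEYWORDS for tok in tokens)

# Hypothetical teacher-generated training data: only number sequences,
# as in the paper's core experiment.
teacher_outputs = ["285, 574, 384", "101, 42, 7", "930, 12, 584"]

filtered = [s for s in teacher_outputs if passes_filter(s)]
print(len(filtered))  # every sample survives filtering
```

The point of the sketch is that the filter is trivially satisfied: whatever carries the trait lives in the statistical distribution of the numbers themselves, which keyword-level filtering cannot see.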

Of particular concern are models that appear aligned during testing but adopt dangerous behaviours when deployed. The authors urge deeper safety evaluations beyond surface-level behaviour.

Altman warns AI voice cloning will break bank security

OpenAI CEO Sam Altman has warned that AI poses a serious threat to financial security through voice-based fraud.

Speaking at a Federal Reserve conference in Washington, Altman said AI can now convincingly mimic human voices, rendering voiceprint authentication obsolete and dangerously unreliable.

He expressed concern that some financial institutions still rely on voice recognition to verify identities. ‘That is a crazy thing to still be doing. AI has fully defeated that,’ he said. The risk, he noted, is that AI voice clones can now deceive these systems with ease.

Altman added that video impersonation capabilities are also advancing rapidly, and that technology indistinguishable from real people could enable more sophisticated fraud schemes. He called for the urgent development of new verification methods across the industry.

Michelle Bowman, the Fed’s Vice Chair for Supervision, echoed the need for action. She proposed potential collaboration between AI developers and regulators to create better safeguards. ‘That might be something we can think about partnering on,’ Bowman told Altman.

Amazon buys Bee AI, the startup that listens to your day

Amazon has acquired Bee AI, a San Francisco-based startup known for its $50 wearable that listens to conversations and provides AI-generated summaries and reminders.

The deal was confirmed by Bee co-founder Maria de Lourdes Zollo in a LinkedIn post on Wednesday, but the acquisition terms were not disclosed. Bee gained attention earlier this year at CES in Las Vegas, where it unveiled a Fitbit-like bracelet using AI to deliver personal insights.

The device received strong feedback for its ability to analyse conversations and create to-do lists, reminders, and daily summaries. Bee also offers a $19-per-month subscription and an Apple Watch app. It raised $7 million before being acquired by Amazon.

‘When we started Bee, we imagined a world where AI is truly personal,’ Zollo wrote. ‘That dream now finds a new home at Amazon.’ Amazon confirmed the acquisition and is expected to integrate Bee’s technology into its expanding AI device strategy.

The company recently updated Alexa with generative AI and added similar features to Ring, its home security brand. Amazon’s hardware division is now led by Panos Panay, the former Microsoft executive who led Surface and Windows 11 development.

Bee’s acquisition suggests Amazon is exploring its own AI-powered wearable to compete in the rapidly evolving consumer tech space. It remains unclear whether Bee will operate independently or be folded into Amazon’s existing device ecosystem.

Privacy concerns have surrounded Bee, as its wearable records audio in real time. The company claims no recordings are stored or used for AI training. Bee insists that users can delete their data at any time. However, privacy groups have flagged potential risks.

The AI hardware market has seen mixed success. Meta’s Ray-Ban smart glasses gained traction, but others like the Rabbit R1 flopped. The Humane AI Pin also failed commercially and was recently sold to HP. Consumers remain cautious of always-on AI devices.

OpenAI is also moving into hardware. In May, it acquired Jony Ive’s AI startup, io, for a reported $6.4 billion. OpenAI has hinted at plans to develop a screenless wearable, joining the race to create ambient AI tools for daily life.

Bee’s transition from startup to Amazon acquisition reflects how big tech is absorbing innovation in ambient, voice-first AI. Amazon’s plans for Bee remain to be seen, but the move could mark a turning point for AI wearables if executed effectively.

Amazon closes AI research lab in Shanghai as global focus shifts

Amazon is shutting down its AI research lab in Shanghai, marking another step in its gradual withdrawal from China. The move comes amid continuing US–China trade tensions and a broader trend of American tech companies reassessing their presence in the country.

The company said the decision was part of a global streamlining effort rather than a response to AI concerns.

A spokesperson for AWS said the company had reviewed its organisational priorities and decided to cut some roles across certain teams. The exact number of job losses has not been confirmed.

Before Amazon’s confirmation, one of the lab’s senior researchers noted on WeChat that the Shanghai site was the final overseas AWS AI research lab and attributed its closure to shifts in US–China strategy.

The team had built a successful open-source graph neural network framework known as DGL, which reportedly brought in nearly $1 billion in revenue for Amazon’s e-commerce arm.

Amazon has been reducing its footprint in China for several years. It closed its domestic online marketplace in 2019, halted Kindle sales in 2022, and recently laid off AWS staff in the US.

Other tech giants including IBM and Microsoft have also shut down China-based research units this year, while some Chinese AI firms are now relocating operations abroad instead of remaining in a volatile domestic environment.

US researchers expose watermark flaws

A team at the University of Maryland found that adversarial attacks easily strip most watermarking technologies designed to label AI‑generated images. Their study reveals that even visible watermarks fail to indicate content provenance reliably.

The US researchers tested low‑perturbation invisible watermarks and more robust visible ones, demonstrating that adversaries can easily remove or forge marks. Lead author Soheil Feizi noted the technology is far from foolproof, warning that ‘we broke all of them’.

Despite these concerns, experts argue that watermarking can still be helpful in a broader detection strategy. UC Berkeley professor Hany Farid said robust watermarking is ‘part of the solution’ when combined with other forensic methods.

Tech giants and researchers continue to develop watermarking tools like Google DeepMind’s SynthID, though such systems are not considered infallible. The consensus emerging from recent tests is that watermarking alone cannot be relied upon to counter deepfake threats.

Musk denies fundraising as xAI eyes supercluster growth

According to sources cited by the Wall Street Journal, Elon Musk’s AI company xAI is working with Valor Equity Partners to raise up to US$12 billion for expansion.

Valor, an investment firm founded by Antonio Gracias, a long-time associate of Musk, is in discussions with lenders to secure the capital.

Funds would be used to acquire a substantial number of Nvidia AI chips, which would then be leased to xAI to support a new large-scale data centre for training and running the Grok chatbot.

Neither Valor nor xAI provided comments in response to media enquiries. Some financial institutions involved in the talks have reportedly pushed for repayment within three years and are seeking to limit borrowing amounts to reduce risk exposure.

Developing and deploying advanced AI systems requires a vast investment in hardware, computational resources and specialist talent. Companies like OpenAI, Google and China-based DeepSeek compete intensely in this domain.

In a post on X, Musk confirmed that Grok is being trained using a supercluster with 230,000 GPUs, including 30,000 of Nvidia’s GB200 chips. Another supercluster will launch soon, beginning with 550,000 GB200 and GB300 chips.

Reports suggest xAI may spend around US$13 billion in 2025. Earlier in July, the Financial Times reported that xAI was discussing raising funds in a deal potentially valuing the firm at between US$170 billion and US$200 billion.

In response to those claims, Musk denied that fundraising was ongoing, stating: ‘We have plenty of capital.’

Teen builds Hindi AI tool to help paralysis patients speak

An Indian teenager has created a low-cost AI device that translates slurred speech into clear Hindi, helping patients with paralysis and neurological conditions communicate more easily.

Pranet Khetan’s innovation, Paraspeak, uses a custom Hindi speech recognition model to address a long-ignored area of assistive tech.

The device was inspired by Khetan’s visit to a paralysis care centre, where he saw patients struggling to express themselves. Unlike existing English models, Paraspeak is trained on India’s first Hindi dysarthric speech dataset, created by Khetan himself through recordings and data augmentation.

Built on a transformer architecture, Paraspeak converts unclear speech into understandable output via cloud processing and a compact neck-worn device. It is designed to scale across different speakers, unlike current solutions that work only for individual patients.

The AI device is affordable, costing around ₹2,000 to build, and is already undergoing real-world testing. With no existing market-ready alternative for Hindi speakers, Paraspeak represents a significant step forward in inclusive health technology.
