AI model boosts accuracy in ranking harmful genetic variants

Researchers have unveiled a new AI model that ranks genetic variants based on their severity. The approach combines deep evolutionary signals with population data to highlight clinically relevant mutations.

The popEVE system integrates protein-scale models with constraints drawn from major genomic databases. Its combined scoring separates harmful missense variants more accurately than leading diagnostic tools.
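
To illustrate the general idea of combining an evolutionary-model score with a population-frequency constraint, here is a minimal sketch. The weighting scheme, the frequency cutoff, and the variant names are all illustrative assumptions, not popEVE's actual method.

```python
# Illustrative sketch: ranking missense variants by combining a
# protein-scale evolutionary-model score with a population-frequency
# constraint. Weighting and cutoff are hypothetical, not popEVE's method.

def combined_score(evo_score: float, allele_freq: float,
                   freq_cutoff: float = 1e-4) -> float:
    """Higher score = more likely pathogenic (illustrative only).

    evo_score: deleteriousness from an evolutionary model, scaled to [0, 1].
    allele_freq: frequency of the variant in a population database.
    """
    # Variants common in healthy populations are unlikely to be severely
    # pathogenic, so penalise anything above the frequency cutoff.
    penalty = 1.0 if allele_freq < freq_cutoff else freq_cutoff / allele_freq
    return evo_score * penalty

variants = {
    "GENE1:p.R123C": (0.95, 1e-6),   # rare + high model score
    "GENE2:p.A45T":  (0.90, 5e-3),   # high score, but common
    "GENE3:p.L77P":  (0.30, 1e-6),   # rare, but low model score
}

ranked = sorted(variants, key=lambda v: combined_score(*variants[v]),
                reverse=True)
print(ranked)  # the rare, high-scoring variant ranks first
```

The point of the sketch is the interaction: a high model score alone is not enough, because a variant seen frequently in healthy populations is down-weighted regardless.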

Clinical tests showed strong performance in developmental disorder cohorts, where damaging mutations clustered clearly. The model also pinpointed likely causal variants in unsolved cases without parental genomes.

Researchers identified hundreds of credible candidate genes with structural and functional support. Findings suggest that AI could accelerate rare disease diagnoses and inform precision counselling worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta expands global push against online scam networks

US tech giant Meta outlined an expanded strategy to limit online fraud by combining technical defences with stronger collaboration across industry and law enforcement.

The company described scams as a threat to user safety and as a direct risk to the credibility of its advertising ecosystem, which remains central to its business model.

Executives emphasised that large criminal networks continue to evolve and that a faster, coordinated response is essential instead of fragmented efforts.

Meta presented recent progress, noting that more than 134 million scam advertisements were removed in 2025 and that reports about misleading advertising fell significantly in the last fifteen months.

It also provided details about disrupted criminal networks that operated across Facebook, Instagram and WhatsApp.

Facial recognition tools played a crucial role in detecting scam content that used images of public figures, increasing the volume of removals during testing before such ads could circulate more widely.

Cooperation with law enforcement remains central to Meta’s approach. The company supported investigations targeting scam centres in Myanmar and illegal online gambling operations linked to transfers through anonymous accounts.

Information shared with financial institutions and partners in the Global Signal Exchange contributed to the removal of thousands of accounts. At the same time, legal action continued against those who used impersonation or bulk messaging to deceive users.

Meta stated that it backs bipartisan legislation designed to support a national response to online fraud. The company argued that new laws are necessary to weaken transnational groups behind large-scale scam operations and to protect users more effectively.

A broader aim is to strengthen trust across Meta’s services, rather than allowing criminal activity to undermine user confidence and advertiser investment.

YouTube criticises Australia’s new youth social-media restrictions

Australia’s forthcoming ban on social media accounts for users under 16 has prompted intense criticism from YouTube, which argues that the new law will undermine existing child safety measures.

From 10 December, young users will be logged out of their accounts and barred from posting or uploading content, though they will still be able to watch videos without signing in.

YouTube said the policy will remove key parental-control tools, such as content filters, channel blocking and well-being reminders, which only function for logged-in accounts.

Rachel Lord, Google and YouTube public-policy lead for Australia, described the measure as ‘rushed regulation’ and warned the changes could make children ‘less safe’ by stripping away long-established protections.

Communications Minister Anika Wells rejected this criticism as ‘outright weird’, arguing that if YouTube believes its own platform is unsafe for young users, it must address that problem itself.

The debate comes as Australia’s eSafety Commissioner investigates other youth-focused apps such as Lemon8 and Yope, which have seen a surge in downloads ahead of the ban.

Regulators reversed YouTube’s earlier exemption in July after identifying it as the platform where 10- to 15-year-olds most frequently encountered harmful content.

Under the new Social Media Minimum Age Act, companies must deactivate underage accounts, prevent new sign-ups and halt any technical workarounds or face penalties of up to A$49.5m.

Officials say the measure responds to concerns about the impact of algorithms, notifications and constant connectivity on Gen Alpha. Wells said the law aims to reduce the ‘dopamine drip’ that keeps young users hooked to their feeds, calling it a necessary step to shield children from relentless online pressures.

YouTube has reportedly considered challenging its inclusion in the ban, but has not confirmed whether it will take legal action.

Mistral AI unveils new open models with broader capabilities

Mistral AI has introduced Mistral 3, a new generation of open multimodal and multilingual models that aim to support developers and enterprises through broader access and improved efficiency.

The company presented both small dense models and a new mixture-of-experts system called Mistral Large 3, offering open-weight releases to encourage wider adoption across different sectors.

Developers are encouraged to build on models in compressed formats that reduce deployment costs, rather than relying on heavier, closed solutions.

The organisation highlighted that Large 3 was trained with extensive resources on NVIDIA hardware to improve performance in multilingual communication, image understanding and general instruction tasks.

Mistral AI underlined its cooperation with NVIDIA, Red Hat and vLLM to deliver faster inference and easier deployment, providing optimised support for data centres along with options suited for edge computing.

The partnership introduced lower-precision execution and improved kernels to increase throughput for frontier-scale workloads.

Attention was also given to the Ministral 3 series, which includes models designed for local or edge settings in three sizes. Each version supports image understanding and multilingual tasks, with instruction and reasoning variants that aim to strike a balance between accuracy and cost efficiency.

Moreover, the company stated that these models produce fewer tokens in real-world use cases, rather than generating unnecessarily long outputs, a choice that aims to reduce operational burdens for enterprises.

Mistral AI continued by noting that all releases will be available through major platforms and cloud partners, offering both standard and custom training services. Organisations that require specialised performance are invited to adapt the models to domain-specific needs under the Apache 2.0 licence.

The company emphasised a long-term commitment to open development and encouraged developers to explore and customise the models to support new applications across different industries.

Public backlash grows as Coupang faces scrutiny over massive data leak

South Korea is facing broader concerns about data governance following Coupang’s confirmation of a breach affecting 33.7 million accounts. Investigators say the leak began months before it was detected, highlighting weak access controls and delayed monitoring across major firms.

Authorities believe a former employee exploited long-valid server tokens and unrevoked permissions to extract customer records. Officials say the scale of the incident underscores persistent gaps in offboarding processes and basic internal safeguards.
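
The safeguards reportedly missing here are basic token hygiene: short-lived credentials and revocation on offboarding. The sketch below is a hypothetical illustration of that pattern; the class and function names are assumptions, not Coupang's systems.

```python
# Illustrative sketch of the missing safeguard: server tokens should be
# short-lived and revoked when an employee leaves. Names are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class ServerToken:
    owner: str
    issued_at: datetime
    ttl: timedelta = timedelta(hours=12)   # short-lived by default
    revoked: bool = False

def offboard(tokens: list[ServerToken], employee: str) -> None:
    """Revoke every token belonging to a departing employee."""
    for t in tokens:
        if t.owner == employee:
            t.revoked = True

def is_valid(token: ServerToken, now: datetime) -> bool:
    # A token must be unrevoked AND within its lifetime.
    return (not token.revoked) and now < token.issued_at + token.ttl

now = datetime.now(timezone.utc)
tokens = [ServerToken("alice", now),
          ServerToken("bob", now - timedelta(days=30))]

offboard(tokens, "alice")
assert not is_valid(tokens[0], now)   # revoked at offboarding
assert not is_valid(tokens[1], now)   # expired long ago
```

Either check alone would have failed in the reported scenario: a long-valid token with no expiry survives an incomplete offboarding process indefinitely.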

Regulators have launched parallel inquiries to assess compliance violations and examine whether structural weaknesses extend beyond a single company. Recent leaks at telecom and financial institutions have raised similar questions about systemic risk.

Public reaction has been intense, with online groups coordinating class-action filings and documenting spikes in spam after the exposure. Many argue that repeated incidents reveal a deeper corporate reluctance to invest meaningfully in security.

Lawmakers are now signalling plans for more substantial penalties and tighter oversight. Analysts warn that unless companies elevate data protection standards, South Korea will continue to face cascading breaches that damage public trust.

AI helps detect congenital heart defects in unborn babies

Mount Sinai doctors in New York City are the first to use AI to enhance prenatal ultrasounds and detect congenital heart defects more effectively. BrightHeart’s FDA-approved technology is now used at Mount Sinai-affiliated Carnegie Imaging for Women across three Manhattan locations.

Congenital heart defects affect about 1 in 500 newborns and often require urgent intervention.

A study in Obstetrics & Gynecology found AI-assisted ultrasounds detected major defects with over 97 percent accuracy, cut reading time by 18 percent, and raised confidence scores by 19 percent.

In the study, obstetricians and maternal-fetal medicine specialists reviewed 200 fetal ultrasounds from 11 centres in two countries, both with and without AI assistance.

AI improved detection, confidence, and efficiency, especially in centres without specialised fetal heart experts.

Experts say AI can level the field of prenatal diagnosis and optimise patient care. Dr Lam-Rachlin and Dr Rebarber emphasised AI’s potential to standardise detection and urged further research for routine clinical use.

NVIDIA platform lifts leading MoE models

Frontier developers are adopting a mixture-of-experts architecture as the foundation for their most advanced open-source models. Designers now rely on specialised experts that activate only when needed instead of forcing every parameter to work on each token.

Major models such as DeepSeek-R1, Kimi K2 Thinking and Mistral Large 3 have risen to the top of the Artificial Analysis leaderboard by using this pattern to combine greater capability with lower computational strain.
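
The core mechanism can be shown in a few lines: a router scores the experts for each token, and only the top-k experts actually run. This is a minimal NumPy sketch of generic top-k routing, not the router of any specific model named above.

```python
# Minimal sketch of top-k mixture-of-experts routing: only k experts run
# per token, so most parameters stay idle. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

# Each "expert" stands in for a feed-forward sub-network.
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
router_w = rng.standard_normal((d_model, n_experts))

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route a single token vector x through its top-k experts."""
    logits = x @ router_w                  # router score per expert
    top = np.argsort(logits)[-top_k:]      # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over the chosen experts
    # Weighted sum of the chosen experts' outputs; the rest never execute.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
out = moe_layer(token)
assert out.shape == (d_model,)
```

With `top_k = 2` of 4 experts, half the expert parameters are untouched per token; frontier models push this ratio much further, which is where the capability-per-FLOP gain comes from.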

Scaling the architecture has always been the main obstacle. Expert parallelism requires high-speed memory access and near-instant communication between multiple GPUs, yet traditional systems often create bottlenecks that slow down training and inference.

NVIDIA has shifted toward extreme hardware and software codesign to remove those constraints.

The GB200 NVL72 rack-scale system links 72 Blackwell GPUs via fast shared memory and a dense NVLink fabric, enabling experts to exchange information rapidly rather than relying on slower network layers.

Model developers report significant improvements once they deploy MoE designs on NVL72. Performance leaps of up to ten times have been recorded for frontier systems, improving latency, energy efficiency and the overall cost of running large-scale inference.

Cloud providers integrate the platform to support customers in building agentic workflows and multimodal systems that route tasks between specialised components, rather than duplicating full models for each purpose.

Industry adoption signals a shift toward a future where efficiency and intelligence evolve together. MoE has become the preferred architecture for state-of-the-art reasoning, and NVL72 offers a practical route for enterprises seeking predictable performance gains.

NVIDIA positions its roadmap, including the forthcoming Vera Rubin architecture, as the next step in expanding the scale and capability of frontier AI.

AWS launches frontier agents to boost software development

AWS has launched frontier agents, autonomous AI tools that extend software development teams. The first three – Kiro, AWS Security Agent, and AWS DevOps Agent – enhance development, security, and operations while working independently for extended periods.

Kiro functions as a virtual developer, maintaining context, learning from feedback, and managing tasks across multiple repositories. AWS Security Agent automates code reviews and penetration testing and enforces organisational security standards.

AWS DevOps Agent identifies root causes of incidents, reduces alerts, and provides proactive recommendations to improve system reliability.

These agents operate autonomously, scale across multiple tasks, and free teams from repetitive work, allowing focus on high-priority projects. Early users, including SmugMug and Commonwealth Bank of Australia, report quicker development, stronger security, and more efficient operations.

By integrating frontier agents into the software development lifecycle, AWS is shifting AI from task assistance to completing complex projects independently, marking a significant step forward in what AI can achieve for development teams.

Amazon rolls out Trainium3 AI chip to challenge Nvidia’s dominance

AWS has launched its in-house AI processor, Trainium3, marking a fresh push to compete with established players in the AI-hardware market. The chip and its associated UltraServer platform were unveiled at the launch event in Las Vegas.

According to Amazon, servers powered by Trainium3 deliver more than four times the performance of the previous generation while using around 40% less energy. Several AI firms, including startups working on large language models, are already using the new hardware to reduce their inference or training costs.

Looking ahead, AWS has signalled plans for a follow-up chip, Trainium4, which is expected to integrate with Nvidia’s NVLink Fusion interconnect technology. That would permit hybrid deployments combining Amazon’s ASICs with traditional GPUs, potentially appealing to AI workloads already built around Nvidia’s ecosystem.

The move highlights a broader trend: major tech firms are increasingly investing in their own AI infrastructure, aiming to reduce dependence on dominant vendors and lower costs. As AWS scales out its custom chips, the AI infrastructure market is poised to become more diverse with price-performance and energy efficiency as key differentiators, rather than raw hardware dominance alone.

UK ministers advance energy plans for AI expansion

The final AI Energy Council meeting of 2025 took place in London, led by AI Minister Kanishka Narayan alongside energy ministers Lord Vallance and Michael Shanks.

Regulators and industry representatives reviewed how the UK can expedite grid connections and support the necessary infrastructure for expanding AI activity nationwide.

Council members examined progress on government measures intended to accelerate connections for AI data centres. Plans include support for AI Growth Zones, with discounted electricity available for sites able to draw on excess capacity, which is expected to reduce pressure on the wider grid.

Ministers underlined AI’s role in national economic ambitions, noting recent announcements of new AI Growth Zones in North East England and in North and South Wales.

They also discussed how forthcoming reforms are expected to help deliver AI-related infrastructure by easing access to grid capacity.

The meeting concluded with a focus on long-term energy needs for AI development. Participants explored ways to unlock additional capacity and considered innovative options for power generation, including self-build solutions.

The council will reconvene in early 2026 to continue work on sustainable approaches for future AI infrastructure.
