Mistral AI unveils new open models with broader capabilities

Yesterday, Mistral AI introduced Mistral 3, a new generation of open multimodal and multilingual models aimed at supporting developers and enterprises through broader access and improved efficiency.

The company presented both small dense models and a new mixture-of-experts system called Mistral Large 3, offering open-weight releases to encourage wider adoption across different sectors.

Developers are encouraged to build on models in compressed formats that reduce deployment costs, rather than relying on heavier, closed solutions.

The company highlighted that Mistral Large 3 was trained at scale on NVIDIA hardware to improve performance in multilingual communication, image understanding and general instruction tasks.

Mistral AI underlined its cooperation with NVIDIA, Red Hat and vLLM to deliver faster inference and easier deployment, providing optimised support for data centres along with options suited for edge computing.

The partnership introduced lower-precision execution and improved kernels to increase throughput for frontier-scale workloads.
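
As an illustration of what that lower-precision path can look like in practice, here is a minimal sketch of serving a quantised model with vLLM. The checkpoint name and the fp8 quantisation mode are placeholder assumptions for the example, not details confirmed in the announcement.

```python
# Minimal vLLM serving sketch: the checkpoint and quantisation mode below
# are illustrative assumptions, not details from the announcement.
from vllm import LLM, SamplingParams

llm = LLM(
    model="mistralai/Mistral-7B-Instruct-v0.3",  # placeholder checkpoint
    quantization="fp8",  # lower-precision execution to raise throughput
)

params = SamplingParams(temperature=0.7, max_tokens=64)
outputs = llm.generate(["Summarise mixture-of-experts in one sentence."], params)
print(outputs[0].outputs[0].text)
```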

Attention was also given to the Ministral 3 series, which includes models in three sizes designed for local or edge settings. Each version supports image understanding and multilingual tasks, with instruction and reasoning variants that aim to balance accuracy and cost efficiency.

Moreover, the company stated that these models produce fewer tokens in real-world use cases, rather than generating unnecessarily long outputs, a choice that aims to reduce operational burdens for enterprises.

Mistral AI continued by noting that all releases will be available through major platforms and cloud partners, offering both standard and custom training services. Organisations that require specialised performance are invited to adapt the models to domain-specific needs under the Apache 2.0 licence.

The company emphasised a long-term commitment to open development and encouraged developers to explore and customise the models to support new applications across different industries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA platform lifts leading MoE models

Frontier developers are adopting the mixture-of-experts (MoE) architecture as the foundation for their most advanced open-source models. Designers now rely on specialised experts that activate only when needed, instead of forcing every parameter to work on each token.
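
To make the routing idea concrete, the toy sketch below implements top-k gating over a small pool of experts: a router scores every expert for each token, and only the two best-scoring experts actually run. It is a minimal illustration of the general MoE pattern, not the routing scheme of any model named here; production systems add load balancing, capacity limits and parallelism across GPUs.

```python
# Toy mixture-of-experts layer with top-k routing (illustrative only).
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    def __init__(self, dim=64, num_experts=8, top_k=2):
        super().__init__()
        # Each expert is a small feed-forward block.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        self.gate = nn.Linear(dim, num_experts)  # router: scores each expert per token
        self.top_k = top_k

    def forward(self, x):  # x: (tokens, dim)
        scores = self.gate(x)                           # (tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # keep only the top-k experts
        weights = weights.softmax(dim=-1)               # normalise the kept scores
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(16, 64)
print(TinyMoE()(tokens).shape)  # torch.Size([16, 64]); only 2 of 8 experts run per token
```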

Major models, such as DeepSeek-R1, Kimi K2 Thinking, and Mistral Large 3, rise to the top of the Artificial Analysis leaderboard by utilising this pattern to combine greater capability with lower computational strain.

Scaling the architecture has always been the main obstacle. Expert parallelism requires high-speed memory access and near-instant communication between multiple GPUs, yet traditional systems often create bottlenecks that slow down training and inference.

NVIDIA has shifted toward extreme hardware-software co-design to remove those constraints.

The GB200 NVL72 rack-scale system links 72 Blackwell GPUs via fast shared memory and a dense NVLink fabric, enabling experts to exchange information rapidly, rather than relying on slower network layers.

Model developers report significant improvements once they deploy MoE designs on NVL72. Performance leaps of up to ten times have been recorded for frontier systems, improving latency, energy efficiency and the overall cost of running large-scale inference.

Cloud providers integrate the platform to support customers in building agentic workflows and multimodal systems that route tasks between specialised components, rather than duplicating full models for each purpose.

Industry adoption signals a shift toward a future where efficiency and intelligence evolve together. MoE has become the preferred architecture for state-of-the-art reasoning, and NVL72 offers a practical route for enterprises seeking predictable performance gains.

NVIDIA positions its roadmap, including the forthcoming Vera Rubin architecture, as the next step in expanding the scale and capability of frontier AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AWS launches frontier agents to boost software development

AWS has launched frontier agents, autonomous AI tools that extend software development teams. The first three – Kiro, AWS Security Agent, and AWS DevOps Agent – enhance development, security, and operations while working independently for extended periods.

Kiro functions as a virtual developer, maintaining context, learning from feedback, and managing tasks across multiple repositories. AWS Security Agent automates code reviews and penetration testing, and enforces organisational security standards.

AWS DevOps Agent identifies root causes of incidents, reduces alerts, and provides proactive recommendations to improve system reliability.

These agents operate autonomously, scale across multiple tasks, and free teams from repetitive work, allowing focus on high-priority projects. Early users, including SmugMug and Commonwealth Bank of Australia, report quicker development, stronger security, and more efficient operations.

By integrating frontier agents into the software development lifecycle, AWS is shifting AI from task assistance to completing complex projects independently, marking a significant step forward in what AI can achieve for development teams.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Honolulu in the US pushes for transparency in government AI use

Growing pressure from Honolulu residents in the US is prompting city leaders to consider stricter safeguards surrounding the use of AI. Calls for greater transparency have intensified as AI has quietly become part of everyday government operations.

Several city departments already rely on automated systems for tasks such as building-plan screening, customer service support and internal administrative work. Advocates now want voters to decide whether the charter should require a public registry of AI tools, human appeal rights and routine audits.

Concerns have deepened after the police department began testing AI-assisted report-writing software without broad consultation. Supporters of reform argue that stronger oversight is crucial to maintain public trust, especially if AI starts influencing high-stakes decisions that impact residents’ lives.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK ministers advance energy plans for AI expansion

The final AI Energy Council meeting of 2025 took place in London, led by AI Minister Kanishka Narayan alongside energy ministers Lord Vallance and Michael Shanks.

Regulators and industry representatives reviewed how the UK can expedite grid connections and support the necessary infrastructure for expanding AI activity nationwide.

Council members examined progress on government measures intended to accelerate connections for AI data centres. Plans include support for AI Growth Zones, with discounted electricity available for sites able to draw on excess capacity, which is expected to reduce pressure on the broader network.

Ministers underlined AI’s role in national economic ambitions, noting recent announcements of new AI Growth Zones in North East England and in North and South Wales.

They also discussed how forthcoming reforms are expected to help deliver AI-related infrastructure by easing access to grid capacity.

The meeting concluded with a focus on long-term energy needs for AI development. Participants explored ways to unlock additional capacity and considered innovative options for power generation, including self-build solutions.

The council will reconvene in early 2026 to continue work on sustainable approaches for future AI infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU states strike deal on chat-scanning law

EU member states have finally reached a unified stance on a long-debated law aimed at tackling online child sexual abuse, ending years of stalemate driven by fierce privacy concerns. Governments agreed to drop the most controversial element of the original proposal, mandatory scanning of private messages, after repeated blockages and public opposition from privacy advocates who warned it would amount to mass surveillance.

The move comes as reports of child abuse material continue to surge, with global hotlines processing nearly 2.5 million suspected images last year.

The compromise, pushed forward under Denmark’s Council presidency, maintains the option for tech companies to scan content voluntarily while affirming that end-to-end encryption must not be compromised. Supporters argue that the agreement closes a regulatory gap that would otherwise open when temporary EU rules allowing voluntary detection expire in 2026.

However, children’s rights groups argue that the Council has not gone far enough, saying that simply preserving the current system will not adequately address the scale of the problem.

Privacy campaigners remain alarmed. Critics fear that framing voluntary scanning as a risk-reduction measure could encourage platforms to expand surveillance of user communications to shield themselves from liability.

Former MEP Patrick Breyer, a prominent voice in the campaign against so-called ‘chat control,’ warned that the compromise could still lead to widespread monitoring and possibly age-verification requirements that limit access to digital services.

With the Council and European Parliament now holding formal positions, negotiations on the regulation’s final shape can begin. But with political divisions still deep and the clock ticking toward the 2026 deadline, it may be months before the EU determines how far it is willing to go in regulating the detection of child sexual abuse material, and at what cost to users’ privacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Regulators question transparency after Mixpanel data leak

Mixpanel is facing criticism after disclosing a security incident with minimal detail, providing only a brief note before the US Thanksgiving weekend. Analysts say the timing and lack of clarity set a poor example for transparency in breach reporting.

OpenAI later confirmed its own exposure, stating that analytics data linked to developer activity had been obtained from Mixpanel’s systems. It stressed that ChatGPT users were not affected and that it had halted its use of the service following the incident.

OpenAI said the stolen information included names, email addresses, coarse location data and browser details, raising concerns about phishing risks. It noted that no advertising identifiers were involved, limiting broader cross-platform tracking.

Security experts say the breach highlights long-standing concerns about analytics companies that collect detailed behavioural and device data across thousands of apps. Mixpanel’s session-replay tools are particularly sensitive, as they can inadvertently capture private information.

Regulators argue the case shows why analytics providers have become prime targets for attackers. They say that more transparent disclosure from Mixpanel is needed to assess the scale of exposure and the potential impact on companies and end-users.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Irish regulator opens investigations into TikTok and LinkedIn

Regulators in Ireland have opened investigations into TikTok and LinkedIn under the EU Digital Services Act.

Coimisiún na Meán’s Investigations Team believes there may be shortcomings in how both platforms handle reports of suspected illegal material. Concerns emerged during an exhaustive review of Article 16 compliance that began last year and focused on the availability of reporting tools.

The review highlighted the potential for interface designs that could confuse users, particularly when choosing between reporting illegal content and content that merely violates platform rules.

The investigation will examine whether reporting tools are easy to access, user-friendly and capable of supporting anonymous reporting of suspected child sexual abuse material, as required under Article 16(2)(c).

It will also assess whether platform design may discourage users from reporting material as illegal under Article 25.

Coimisiún na Meán stated that several other providers made changes to their reporting systems following regulatory engagement. Those changes are being reviewed for effectiveness.

The regulator emphasised that platforms must avoid practices that could mislead users and must provide reliable reporting mechanisms instead of diverting people toward less protective options.

These investigations will proceed under the Broadcasting Act of Ireland. If either platform is found to be in breach of the DSA, the regulator can impose administrative penalties that may reach six percent of global turnover.

Coimisiún na Meán noted that cooperation remains essential and that further action may be necessary if additional concerns about DSA compliance arise.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI expands investment in mental health safety research

Yesterday, OpenAI launched a new grant programme to support external research on the connection between AI and mental health.

The initiative aims to expand independent inquiry into how people express distress, how AI interprets complex emotional signals and how different cultures shape the language used to discuss sensitive experiences.

OpenAI also hopes that broader participation will strengthen collective understanding, rather than keeping progress confined to internal studies.

The programme encourages interdisciplinary work that brings together technical specialists, mental health professionals and people with lived experience. OpenAI is seeking proposals that offer clear outputs that improve safety and guidance, such as datasets, evaluation methods or practical insights.

Researchers may focus on patterns of distress in specific communities, the influence of slang and vernacular, or the challenges that appear when mental health symptoms manifest in ways that current systems fail to recognise.

The grants also aim to expand knowledge of how providers use AI within care settings, including where tools are practical, where limitations appear and where risks emerge for users.

Additional areas of interest include how young people respond to different tones or styles, how grief is expressed in language and how visual cues linked to body image concerns can be interpreted responsibly.

OpenAI emphasises that better evaluation frameworks, ethical datasets and annotated examples can support safer development across the field.

Applications are open until 19 December, with decisions expected by mid-January. The programme forms part of OpenAI’s broader effort to invest in well-being and safety research, offering financial support to independent teams working across diverse cultural and linguistic contexts.

The company argues that expanding evidence and perspectives will contribute to a more secure and supportive environment for future AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

eSafety highlights risks in connected vehicle technology

Australia’s eSafety regulator is drawing attention to concerns about how connected car features can be misused within domestic and family violence situations.

Reports from frontline workers indicate that remote access tools, trip records and location tracking can be exploited instead of serving their intended purpose as safety and convenience features.

The Australian regulator stresses that increased connectivity across vehicles and devices is creating new challenges for those supporting victim-survivors.

Smart cars often store detailed travel information and allow remote commands through apps and online accounts. These functions can be accessed by someone with shared credentials or linked accounts, which can expose sensitive information.

eSafety notes that misuse of connected vehicles forms part of a broader pattern of technology-facilitated coercive control, where multiple smart devices such as watches, tablets, cameras and televisions can play a role.

The regulator has produced updated guidance to help people understand potential risks and take practical steps with the support of specialist services.

Officials highlight the importance of stronger safeguards from industry, including simpler methods for revoking access, clearer account transfer processes during separation and more transparent logs showing when remote commands are used.

Retailers and dealerships are encouraged to ensure devices and accounts are reset when ownership changes. eSafety argues that design improvements introduced early can reduce the likelihood of harm, rather than requiring complex responses later.

Agencies and community services continue to assist those affected by domestic and family violence, offering advice on account security, safe device use and available support services.

The guidance aims to help people take protective measures in a controlled and safe way, while emphasising the importance of accessing professional assistance.

eSafety encourages ongoing cooperation between industry, government and frontline workers to manage risks linked to emerging automotive and digital technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!