Cyber Resilience Act signals a major shift in EU product security

EU regulators are preparing to enforce the Cyber Resilience Act, setting core security requirements for digital products in the European market. The law spans software, hardware, and firmware, establishing shared expectations for secure development and maintenance.

The scope covers apps, embedded systems, and cloud-linked features. Risk classes run from the default category up to critical, determining whether firms may self-assess or must undergo third-party checks. Any product placed on the EU market after December 2027 must comply with the regulation.

Obligations apply to manufacturers, importers, distributors, and developers. Duties include secure-by-design practices, documented risk analysis, disclosure procedures, and long-term support. Firms must notify ENISA within 24 hours of becoming aware of an actively exploited vulnerability and provide follow-up reports on a strict timeline.

Compliance requires technical files covering threat assessments, update plans, and software bills of materials. High-risk categories demand third-party evaluation, while lower-risk segments may rely on internal checks. Existing certifications help, but cannot replace CRA-specific conformity work.
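
For a concrete sense of one such artefact, the sketch below assembles a minimal software bill of materials in the CycloneDX JSON format. It is an illustration only: the product and component names, versions, and package URL are hypothetical, and the CRA does not prescribe this particular SBOM standard.

```python
import json
import uuid
from datetime import datetime, timezone

# Minimal CycloneDX-style SBOM; product and component details are hypothetical.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "serialNumber": f"urn:uuid:{uuid.uuid4()}",
    "version": 1,
    "metadata": {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "component": {
            "type": "application",
            "name": "smart-thermostat-firmware",  # hypothetical product
            "version": "2.4.1",
        },
    },
    "components": [
        {
            "type": "library",
            "name": "openssl",
            "version": "3.0.13",
            "purl": "pkg:generic/openssl@3.0.13",
        }
    ],
}

with open("sbom.json", "w") as f:
    json.dump(sbom, f, indent=2)
```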

Non-compliance risks fines, market restrictions, and reputational damage. Organisations preparing early are urged to classify products, run gap assessments, build structured roadmaps, and align development cycles with CRA guidance. EU authorities plan to provide templates and support as firms transition.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT users gain Jira and Confluence access through Atlassian’s MCP connector

Atlassian has launched a new connector that lets ChatGPT users access Jira and Confluence data via the Model Context Protocol. The company said the Rovo MCP Connector supports task summarisation, issue creation and workflow automation directly inside ChatGPT.

Atlassian noted rising demand for integrations beyond its initial beta ecosystem. Users in Europe and elsewhere can now draw on Jira and Confluence data without switching interfaces, while partners such as Figma and HubSpot continue to expand the MCP network.

Engineering, marketing and service teams can request summaries, monitor task progress and generate issues from within ChatGPT. Users can also automate multi-step actions, including bulk updates. Jira write-back support enables changes to be pushed directly into project workflows.

Security controls accompany the connector release. Atlassian said the Rovo MCP Server uses OAuth authentication and respects existing permissions across Jira and Confluence spaces. Administrators can also enforce an allowlist to control which clients may connect.
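
Under the hood, MCP exchanges are JSON-RPC 2.0 messages, with tool invocations sent as tools/call requests. The sketch below shows roughly what such a call against a remote MCP server could look like; the endpoint URL, access token, tool name, and arguments are hypothetical placeholders, not Atlassian's published interface.

```python
import requests

# Hypothetical values: the real endpoint, token, and tool names depend on
# how the Rovo MCP Server is configured for a given Atlassian site.
MCP_ENDPOINT = "https://example.atlassian.net/mcp"
OAUTH_TOKEN = "<access-token-from-oauth-flow>"

# MCP messages are JSON-RPC 2.0; a tool invocation uses the "tools/call"
# method (after the usual MCP initialisation handshake, omitted here).
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_jira_issue",  # hypothetical tool name
        "arguments": {
            "project": "ENG",
            "summary": "Investigate login timeout",
            "issue_type": "Bug",
        },
    },
}

response = requests.post(
    MCP_ENDPOINT,
    json=payload,
    # The bearer token lets the server enforce the caller's existing permissions.
    headers={"Authorization": f"Bearer {OAUTH_TOKEN}"},
)
print(response.json())
```

Because the request carries the caller's OAuth token, the server can apply the same Jira and Confluence permissions the user already holds, which is what makes the allowlist and permission model described above enforceable.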

Atlassian frames the initiative as part of its long-term focus on open collaboration. The company said the connector reflects demand for tools that unify context, search and automation, positioning the MCP approach as a flexible extension of existing team practices.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AstraZeneca backs Pangaea’s AI platform to scale precision healthcare

Pangaea Data, a health-tech firm specialising in patient-intelligence platforms, announced a strategic, multi-year partnership with AstraZeneca to deploy multimodal artificial intelligence in clinical settings. The goal is to bring AI-driven, data-rich clinical decision-making to scale, improving how patients are identified, diagnosed, treated and connected to therapies or clinical trials.

The collaboration will see AstraZeneca sponsoring the configuration, validation and deployment of Pangaea’s enterprise-grade platform, which merges large-scale clinical, imaging, genomic, pathology and real-world data. It will also leverage generative and predictive AI capabilities from Microsoft and NVIDIA for model training and deployment.

Among the planned applications are supporting point-of-care treatment decisions and identifying patients who are undiagnosed, undertreated or misdiagnosed, across diseases ranging from chronic conditions to cancer.

Pangaea’s CEO said the partnership aims to efficiently connect patients to life-changing therapies and trials in a compliant, financially sustainable way. For AstraZeneca, the effort reflects a broader push to integrate AI-driven precision medicine across its R&D and healthcare delivery pipeline.

From a policy and health-governance standpoint, this alliance is important. It demonstrates how multimodal AI, combining different data types beyond standard medical records, is being viewed not just as a research tool, but as a potentially transformative element of clinical care.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU opens antitrust probe into Meta’s WhatsApp AI rollout

Brussels has opened an antitrust inquiry into Meta over how AI features were added to WhatsApp, focusing on whether the updated access policies hinder market competition. Regulators say scrutiny is needed as integrated assistants become central to messaging platforms.

Meta AI has been built into WhatsApp across Europe since early 2025, prompting questions about whether external AI providers face unfair barriers. Meta rejects the accusations and argues that users can reach rival tools through other digital channels.

Italy launched a related proceeding in July and expanded it in November, examining claims that Meta curtailed access for competing chatbots. Authorities worry that dominance in messaging could influence the wider AI services market.

EU officials confirmed the case will proceed under standard antitrust rules rather than the Digital Markets Act. Investigators aim to understand how embedded assistants reshape competitive dynamics in services used by millions.

European regulators say outcomes could guide future oversight as generative AI becomes woven into essential communications. The case signals growing concern about concentrated power in fast-evolving AI ecosystems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Uzbekistan sets principles for responsible AI

Uzbekistan has adopted a new ethical framework for the development and use of AI technologies.

The rules, prepared by the Ministry of Digital Technologies, establish unified standards for developers, implementing organisations and users of AI systems, ensuring AI respects human rights, privacy and societal trust.

The framework forms part of presidential decrees and resolutions aimed at advancing AI innovation across the country, and it emphasises legality, transparency, fairness, accountability, and continuous human oversight.

AI systems must avoid discrimination based on gender, nationality, religion, language or social origin.

Developers are required to ensure algorithmic clarity, assess risks and bias in advance, and prevent AI from causing harm to individuals, society, the state or the environment.

Users of AI systems must comply with legislation, safeguard personal data, and operate technologies responsibly. Any harm caused during AI development or deployment carries legal liability.

The Ministry of Digital Technologies will oversee standards, address ethical concerns, foster industry cooperation, and improve digital literacy across Uzbekistan.

The initiative aligns with broader efforts to prepare Uzbekistan for AI adoption in healthcare, education, transport, space, and other sectors.

By establishing clear ethical principles, the country aims to strengthen trust in AI applications and ensure responsible and secure use nationwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta expands global push against online scam networks

The US tech giant Meta outlined an expanded strategy to limit online fraud by combining technical defences with stronger collaboration across industry and law enforcement.

The company described scams as a threat to user safety and as a direct risk to the credibility of its advertising ecosystem, which remains central to its business model.

Executives emphasised that large criminal networks continue to evolve and that a fast, coordinated response is needed in place of fragmented efforts.

Meta presented recent progress, noting that more than 134 million scam advertisements were removed in 2025 and that reports about misleading advertising fell significantly in the last fifteen months.

It also provided details about disrupted criminal networks that operated across Facebook, Instagram and WhatsApp.

Facial recognition tools played a crucial role in detecting scam content that used images of public figures, driving a higher volume of removals during testing before such material could circulate widely.

Cooperation with law enforcement remains central to Meta’s approach. The company supported investigations that targeted scam centres in Myanmar and illegal online gambling operations connected to transfers through anonymous accounts.

Information shared with financial institutions and partners in the Global Signal Exchange contributed to the removal of thousands of accounts. At the same time, legal action continued against those who used impersonation or bulk messaging to deceive users.

Meta stated that it backs bipartisan legislation designed to support a national response to online fraud. The company argued that new laws are necessary to weaken transnational groups behind large-scale scam operations and to protect users more effectively.

A broader aim is to strengthen trust across Meta’s services, rather than allowing criminal activity to undermine user confidence and advertiser investment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Mistral AI unveils new open models with broader capabilities

Yesterday, Mistral AI introduced Mistral 3 as a new generation of open multimodal and multilingual models that aim to support developers and enterprises through broader access and improved efficiency.

The company presented both small dense models and a new mixture-of-experts system called Mistral Large 3, offering open-weight releases to encourage wider adoption across different sectors.

Developers are encouraged to build on the models in compressed formats that reduce deployment costs, rather than relying on heavier, closed solutions.

The company highlighted that Mistral Large 3 was trained with extensive resources on NVIDIA hardware to improve performance in multilingual communication, image understanding and general instruction tasks.

Mistral AI underlined its cooperation with NVIDIA, Red Hat and vLLM to deliver faster inference and easier deployment, providing optimised support for data centres along with options suited for edge computing.

The partnership introduced lower-precision execution and improved kernels to increase throughput for frontier-scale workloads.
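
As a rough illustration of what serving an open-weight checkpoint on the vLLM stack looks like, the snippet below loads a model and generates a completion. The model identifier is a stand-in (an earlier open Mistral release), since the announcement does not give the exact Mistral 3 repository names.

```python
from vllm import LLM, SamplingParams

# Stand-in checkpoint: an earlier open-weight Mistral release, used here
# only because the exact Mistral 3 repository names are not given.
llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.3")

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Summarise the benefits of open-weight models."], params)
for out in outputs:
    print(out.outputs[0].text)
```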

Attention was also given to the Ministral 3 series, which includes models designed for local or edge settings in three sizes. Each version supports image understanding and multilingual tasks, with instruction and reasoning variants that aim to strike a balance between accuracy and cost efficiency.

Moreover, the company stated that these models produce fewer tokens in real-world use cases, rather than generating unnecessarily long outputs, a choice that aims to reduce operational burdens for enterprises.

Mistral AI continued by noting that all releases will be available through major platforms and cloud partners, offering both standard and custom training services. Organisations that require specialised performance are invited to adapt the models to domain-specific needs under the Apache 2.0 licence.

The company emphasised a long-term commitment to open development and encouraged developers to explore and customise the models to support new applications across different industries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA platform lifts leading MoE models

Frontier developers are adopting a mixture-of-experts architecture as the foundation for their most advanced open-source models. Designers now rely on specialised experts that activate only when needed instead of forcing every parameter to work on each token.
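
A minimal sketch of that routing idea, assuming a standard top-k gating scheme rather than any particular vendor's implementation: a small router scores the experts for each token, and only the top-scoring experts actually run.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Toy mixture-of-experts layer: each token activates only k experts."""

    def __init__(self, dim: int, num_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(dim, num_experts)  # scores every expert per token
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
             for _ in range(num_experts)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, dim). Choose the k highest-scoring experts per token.
        weights, idx = torch.topk(F.softmax(self.router(x), dim=-1), self.k, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():  # only the selected experts do any work
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

moe = TopKMoE(dim=64)
tokens = torch.randn(10, 64)
print(moe(tokens).shape)  # torch.Size([10, 64])
```

Because only k experts execute per token, compute cost grows with k rather than with the total number of experts, which is the trade-off the pattern exploits.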

Major models such as DeepSeek-R1, Kimi K2 Thinking, and Mistral Large 3 have risen to the top of the Artificial Analysis leaderboard by using this pattern to combine greater capability with lower computational strain.

Scaling the architecture has always been the main obstacle. Expert parallelism requires high-speed memory access and near-instant communication between multiple GPUs, yet traditional systems often create bottlenecks that slow down training and inference.

NVIDIA has shifted toward extreme hardware and software codesign to remove those constraints.

The GB200 NVL72 rack-scale system links seventy-two Blackwell GPUs via fast shared memory and a dense NVLink fabric, enabling experts to exchange information rapidly, rather than relying on slower network layers.

Model developers report significant improvements once they deploy MoE designs on NVL72. Performance leaps of up to ten times have been recorded for frontier systems, improving latency, energy efficiency and the overall cost of running large-scale inference.

Cloud providers integrate the platform to support customers in building agentic workflows and multimodal systems that route tasks between specialised components, rather than duplicating full models for each purpose.

Industry adoption signals a shift toward a future where efficiency and intelligence evolve together. MoE has become the preferred architecture for state-of-the-art reasoning, and NVL72 offers a practical route for enterprises seeking predictable performance gains.

NVIDIA positions its roadmap, including the forthcoming Vera Rubin architecture, as the next step in expanding the scale and capability of frontier AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AWS launches frontier agents to boost software development

AWS has launched frontier agents, autonomous AI tools that extend software development teams. The first three – Kiro, AWS Security Agent, and AWS DevOps Agent – enhance development, security, and operations while working independently for extended periods.

Kiro functions as a virtual developer, maintaining context, learning from feedback, and managing tasks across multiple repositories. AWS Security Agent automates code reviews and penetration testing and enforces organisational security standards.

AWS DevOps Agent identifies root causes of incidents, reduces alerts, and provides proactive recommendations to improve system reliability.

These agents operate autonomously, scale across multiple tasks, and free teams from repetitive work, allowing focus on high-priority projects. Early users, including SmugMug and Commonwealth Bank of Australia, report quicker development, stronger security, and more efficient operations.

By integrating frontier agents into the software development lifecycle, AWS is shifting AI from task assistance to completing complex projects independently, marking a significant step forward in what AI can achieve for development teams.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Honolulu in the US pushes for transparency in government AI use

Growing pressure from Honolulu residents in the US is prompting city leaders to consider stricter safeguards surrounding the use of AI. Calls for greater transparency have intensified as AI has quietly become part of everyday government operations.

Several city departments already rely on automated systems for tasks such as building-plan screening, customer service support and internal administrative work. Advocates now want voters to decide whether the charter should require a public registry of AI tools, human appeal rights and routine audits.

Concerns have deepened after the police department began testing AI-assisted report-writing software without broad consultation. Supporters of reform argue that stronger oversight is crucial to maintain public trust, especially if AI starts influencing high-stakes decisions that impact residents’ lives.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!