D3FEND 1.0 brings structured security graphs

MITRE has unveiled its new Cyber Attack–Defense (CAD) tool as part of the D3FEND 1.0 release, offering security teams a structured way to model and counter cyber threats.

The browser‑based interface lets users build ‘D3FEND Graphs’—knowledge graphs grounded in a rich cybersecurity ontology—instead of relying on ad hoc PowerPoint diagrams.

Graph components include Attack nodes (tied to MITRE ATT&CK techniques), Countermeasure nodes (D3FEND defensive measures) and Digital Artifact nodes (elements from the D3FEND artifact ontology).

A drag‑and‑drop canvas enables rapid scene‑setting, while an ‘explode’ feature reveals related attack paths, defences or artefacts drawn from the ontology’s knowledge base.

Organisations can apply the CAD tool across threat intelligence, security engineering, detection scenario planning, incident investigation and risk assessments.

Exports in JSON, TTL or PNG support collaboration, while STIX 2.1 import lets teams bring existing threat data into a graph. Users may also extend the underlying ontology to capture emerging techniques.
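
The node-and-edge structure can be pictured with a small, hypothetical sketch. Everything below is illustrative only: the field names are invented for the example and the `d3f:` and ATT&CK identifiers are placeholders, not the CAD tool's actual export schema.

```python
import json

# Illustrative only: a tiny D3FEND-style graph with the three node types
# described above (attack, countermeasure, digital artifact) joined by
# typed edges. Field names and identifiers are placeholders, not the
# CAD tool's real JSON export format.
graph = {
    "nodes": [
        {"id": "n1", "kind": "attack", "ref": "T1566", "label": "Phishing"},
        {"id": "n2", "kind": "artifact", "ref": "d3f:EmailMessage", "label": "Email Message"},
        {"id": "n3", "kind": "countermeasure", "ref": "d3f:SenderReputationAnalysis",
         "label": "Sender Reputation Analysis"},
    ],
    "edges": [
        {"source": "n1", "target": "n2", "relation": "produces"},  # attack produces the artifact
        {"source": "n3", "target": "n2", "relation": "analyses"},  # defence analyses the artifact
    ],
}

print(json.dumps(graph, indent=2))
```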

Built in partnership with the NSA and various defence departments, D3FEND 1.0 and its CAD tool establish a common vocabulary and conceptual framework for cybersecurity operations.

As threats grow ever more complex, a methodical, semantically rigorous approach to modelling defences is set to become indispensable.

TSMC struggles to block chip exports to China

Taiwan Semiconductor Manufacturing Company (TSMC) has acknowledged it faces significant challenges in ensuring its advanced chips do not end up with sanctioned entities in China, despite tightening export controls.

The company admitted in its latest annual report that its position as a contract chipmaker limits its visibility into how and where its semiconductors are ultimately used.

Instead of directly selling finished products, TSMC manufactures chips for firms like Nvidia and Qualcomm, which are then integrated into a wide range of devices by third parties.

A layered supply chain structure like this makes it difficult for the company to guarantee full compliance with export restrictions, especially when intermediaries may divert shipments intentionally.

TSMC halted deliveries to a customer last year after discovering one of its AI chips had been diverted to Huawei, a Chinese tech giant on the US sanctions list. The company promptly notified both Washington and Taipei and has since cooperated with official investigations and information requests.

The US continues to tighten restrictions on advanced chip exports to China, urging companies like TSMC and Samsung to apply stricter scrutiny.

Recently, Washington blacklisted 16 Chinese entities, including firms allegedly linked to the unauthorised transfer of TSMC chips. Despite its best efforts, TSMC says there is no assurance it can completely prevent such incidents.

TSMC profits surge despite trade concerns

Taiwan Semiconductor Manufacturing Company (TSMC) posted a significant jump in quarterly profits, driven by robust demand for AI chips. Net income rose by just over 60% year-on-year to NT$360.7bn (£9.77bn), outpacing analysts’ expectations.

Revenue also grew by 41.6% compared to the same period in 2024, although it dipped slightly from the previous quarter due to weaker smartphone sales.

The world’s largest contract chipmaker has not yet seen any major changes in customer behaviour, including from Apple and Nvidia, despite increasing uncertainty over potential US tariffs on Taiwanese semiconductors.

While concerns about trade tensions grow, particularly with US President Donald Trump suggesting the US should reclaim chip production, TSMC says it is continuing with business as usual for now.

Instead of scaling back, TSMC is expanding its investment in the US, with plans to spend up to $160bn. Analysts believe this move could help the firm argue for a more favourable position should tariff negotiations intensify.

The company’s Chief Financial Officer, Wendell Huang, acknowledged the risks posed by changing trade policies but said revenue growth is still expected in the next quarter.

Despite global pressures, TSMC remains optimistic, forecasting revenue between $28.4bn and $29.2bn. Although the company’s shares have fallen more than 20% so far this year, some analysts say the stock is now undervalued and well-positioned to rebound once market conditions stabilise.

OpenAI deploys new safeguards for AI models to curb biothreat risks

OpenAI has introduced a new monitoring system to reduce the risk of its latest AI models, o3 and o4-mini, being misused to create chemical or biological threats.

The ‘safety-focused reasoning monitor’ is built to detect prompts related to dangerous materials and instruct the AI models to withhold potentially harmful advice, instead of providing answers that could aid bad actors.

These newer models represent a major leap in capability compared to previous versions, especially in their ability to respond to prompts about biological weapons. To counteract this, OpenAI’s internal red teams spent 1,000 hours identifying unsafe interactions.

Simulated tests showed the safety monitor successfully blocked 98.7% of risky prompts, although OpenAI admits the system does not account for users retrying with different wording, a gap it says is still covered by human oversight rather than automation alone.
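
OpenAI has not published the monitor's internals. As a rough, hypothetical illustration of the general pattern, a pre-generation screen might look like the sketch below, where classify_risk merely stands in for a policy-tuned reasoning model and is not OpenAI's actual classifier.

```python
# Hypothetical sketch of a pre-generation safety screen; not OpenAI's code.
# A separate monitor scores each prompt against content policy before the
# main model is allowed to answer.

REFUSAL = "I can't help with that request."

def classify_risk(prompt: str) -> float:
    """Stand-in for a policy-tuned reasoning model that returns a risk
    score in [0, 1] for chemical or biological threat content."""
    flagged_terms = ("pathogen synthesis", "nerve agent", "weaponisation")
    return 1.0 if any(term in prompt.lower() for term in flagged_terms) else 0.0

def safe_answer(prompt: str, model, threshold: float = 0.5) -> str:
    # Withhold the model's answer whenever the monitor flags the prompt.
    if classify_risk(prompt) >= threshold:
        return REFUSAL
    return model(prompt)

# Example with a dummy model in place of the real one:
print(safe_answer("How do I bake sourdough bread?", model=lambda p: "Here's a recipe..."))
```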

Despite assurances that neither o3 nor o4-mini meets OpenAI’s ‘high risk’ threshold, the company acknowledges these models are more effective at answering dangerous questions than earlier ones like o1 and GPT-4.

Similar monitoring tools are also being used to block harmful image generation in other models, yet critics argue OpenAI should do more.

Concerns have been raised over rushed testing timelines and the absence of a safety report for GPT-4.1, which launched this week without accompanying transparency documentation.

AMD warns of financial hit from US AI chip export ban

AMD has warned that new US government restrictions on exporting AI chips to China and several other countries could materially affect its earnings.

The company said it may face charges of up to $800 million related to unsold inventory, purchase commitments, and reserves if it fails to secure export licences for its MI308 GPUs, now subject to strict control measures.

In a filing to the US Securities and Exchange Commission, AMD confirmed it would seek the necessary licences but admitted there is no guarantee they will be granted.

The move follows broader export restrictions aimed at protecting national security interests, with US officials arguing that unrestricted access to advanced chips would erode the country’s strategic lead in AI.

AMD’s stock dropped around 6% following the announcement. Competitors are also feeling the impact. Nvidia expects charges of $5.5 billion from similar restrictions, and Intel’s Gaudi hardware line has reportedly been affected as well.

The US Commerce Department has defended the move as necessary to safeguard economic and national interests.

xAI pushes Grok forward with memory update

Elon Musk’s AI venture, xAI, has introduced a new ‘memory’ feature for its Grok chatbot in a bid to compete more closely with established rivals like ChatGPT and Google’s Gemini.

The update allows Grok to remember details from past conversations, enabling it to provide more personalised responses when asked for advice or recommendations, instead of offering generic answers.

Unlike before, Grok can now ‘learn’ a user’s preferences over time, provided it’s used frequently enough. The move mirrors similar features from competitors, with ChatGPT already referencing full chat histories and Gemini using persistent memory to shape its replies.

According to xAI, the memory is fully transparent. Users can view what Grok has remembered and choose to delete specific entries at any time.
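
xAI has not documented how the memory store works internally. Purely to illustrate the user-facing behaviour described above (remember, view, delete, opt out), a toy per-user memory store might look like the sketch below; nothing in it reflects xAI's actual implementation or API.

```python
# Toy illustration of a per-user chatbot memory store with the controls
# described above (remember, view, delete, opt out); not xAI's code.
from dataclasses import dataclass, field

@dataclass
class UserMemory:
    enabled: bool = True                        # can be switched off in settings
    entries: dict = field(default_factory=dict)
    _next_id: int = 0

    def remember(self, fact: str) -> None:
        if self.enabled:                        # nothing is stored once disabled
            self.entries[self._next_id] = fact
            self._next_id += 1

    def view(self) -> dict:
        # Transparency: the user can see everything that has been stored.
        return dict(self.entries)

    def forget(self, entry_id: int) -> None:
        # Delete one specific remembered item.
        self.entries.pop(entry_id, None)

memory = UserMemory()
memory.remember("prefers vegetarian restaurant recommendations")
print(memory.view())    # {0: 'prefers vegetarian restaurant recommendations'}
memory.forget(0)
```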

The memory function is currently available in beta on Grok’s website and mobile apps, although not yet accessible to users in the EU or UK.

The feature is optional rather than permanently enabled, and can be turned off in the settings menu under Data Controls. Deleting individual memories is also possible via the web chat interface, with Android support expected shortly.

xAI has confirmed it is working on adding memory support to Grok’s version on X, an expansion intended to deepen the bot’s integration with users’ digital lives rather than confine the experience to a single platform.

Quantum spin breakthrough at room temperature

South Korean researchers have discovered a way to generate much stronger spin currents at room temperature, potentially transforming the future of electronics.

By using a mechanism called longitudinal spin pumping and a special iron-rhodium material, the team showed that quantum magnetisation dynamics, once thought to occur only at extremely low temperatures, can take place in everyday conditions.

These currents were found to be 10 times stronger than those created through traditional methods, offering a major boost for low-power, high-performance devices.

Instead of relying on the movement of electric charge, spintronics makes use of the electron’s spin, which reduces energy loss and heat generation. This advancement could be particularly beneficial for Magnetoresistive Random Access Memory (MRAM), a type of memory that depends on spin currents to function.

Researchers believe their findings may significantly cut power consumption in MRAM, which is already being explored by companies like Samsung for next-generation AI computing systems.

The study, carried out by teams at KAIST and Sogang University, used a combination of ultrafast measurement experiments and theoretical analysis to validate the discovery. Experts say the results could lead to a new era of energy-efficient memory and processor technologies.

Building on these results, the researchers now plan to develop novel spintronic device architectures and explore other quantum-based mechanisms to push the limits of what modern electronics can achieve.

Hamburg Declaration champions responsible AI

The Hamburg Declaration on Responsible AI for the Sustainable Development Goals (SDGs) is a new global initiative jointly launched by the United Nations Development Programme (UNDP) and Germany’s Federal Ministry for Economic Cooperation and Development (BMZ).

The Declaration seeks to build a shared vision for AI that supports fair, inclusive, and sustainable global development. It is set to be officially adopted at the Hamburg Sustainability Conference in June 2025.

The initiative brings together voices from across sectors—governments, civil society, academia, and industry—to shape how AI can ethically and effectively align with the SDGs. Central to this effort is an open consultation process inviting stakeholders to provide feedback on the draft declaration, participate in expert discussions, and endorse its principles.

In addition to the declaration itself, the initiative also features the AI SDG Compendium, a global registry of AI projects contributing to sustainable development. The process has already gained visibility at major international forums like the Internet Governance Forum and the AI Action Summit in Paris, reflecting its growing significance in leveraging responsible AI for the SDGs.

The Declaration aims to ensure that AI is developed and used in ways that respect human rights, reduce inequalities, and foster sustainable progress. By establishing shared principles and promoting collaboration across sectors and regions, it sets a foundation for responsible AI that serves both people and the planet.

Microsoft unveils powerful lightweight AI model for CPUs

Microsoft researchers have introduced the largest 1-bit AI model to date, called BitNet b1.58 2B4T, designed to run efficiently on standard CPUs instead of relying on GPUs. This ‘bitnet’ model, now openly available under the MIT license, can even operate on Apple’s M2 chips.

Bitnets use extreme weight quantisation, storing only -1, 0, or 1 as values, making them far more memory- and compute-efficient than most conventional models.
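
To show what this involves in practice, the sketch below rounds a weight matrix to {-1, 0, 1} using absolute-mean scaling, the scheme described in the BitNet b1.58 paper. It is a simplified illustration, not code from Microsoft's bitnet.cpp framework.

```python
import numpy as np

def ternary_quantize(W: np.ndarray, eps: float = 1e-8):
    """Quantise weights to {-1, 0, 1} with absolute-mean scaling
    (simplified sketch of the BitNet b1.58 scheme)."""
    gamma = np.abs(W).mean() + eps                       # per-tensor scale
    W_q = np.clip(np.round(W / gamma), -1, 1).astype(np.int8)
    return W_q, gamma                                    # keep gamma to rescale outputs

# Example: a float32 weight matrix collapses to three possible values.
W = np.random.randn(4, 4).astype(np.float32)
W_q, gamma = ternary_quantize(W)
print(W_q)                        # entries are only -1, 0 or 1
print(f"scale = {gamma:.3f}")
```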

With 2 billion parameters and trained on 4 trillion tokens, roughly the equivalent of 33 million books, BitNet b1.58 2B4T outperforms several similarly sized models in key benchmarks.

Microsoft claims it beats Meta’s Llama 3.2 1B, Google’s Gemma 3 1B, and Alibaba’s Qwen 2.5 1.5B on tasks like grade-school maths and physical reasoning. It also runs up to twice as fast while using significantly less memory, offering a potential edge for lower-end or energy-constrained devices.

The main limitation lies in its dependence on Microsoft’s custom bitnet.cpp framework, which supports only select hardware and does not yet work with GPUs.

Because it is not broadly compatible with existing AI infrastructure, BitNet’s performance depends on a narrower software stack, a hurdle that may limit adoption despite its promise for lightweight AI deployment.

Google uses AI and human reviews to fight ad fraud

Google has revealed it suspended 39.2 million advertiser accounts in 2024, more than triple the number from the previous year, as part of its latest push to combat ad fraud.

The tech giant said it is now able to block most bad actors before they even run an advert, thanks to advanced large language models and detection signals such as fake business details and fraudulent payments.

Instead of relying solely on AI, a team of over 100 experts from across Google and DeepMind also reviews deepfake scams and develops targeted countermeasures.

The company rolled out more than 50 LLM-based safety updates last year and introduced over 30 changes to advertising and publishing policies. These efforts, alongside other technical reinforcements, led to a 90% drop in reports of deepfake ads.

The US saw the highest number of suspensions, with 39.2 million accounts coming from there alone, while India followed with 2.9 million accounts taken down. In both countries, ads were removed for violations such as trademark abuse, misleading personalisation, and financial service scams.

Overall, Google blocked 5.1 billion ads globally and restricted a further 9.1 billion. Nearly half a billion of those removed were linked specifically to scam activity.

In a year when half the global population headed to the polls, Google also verified over 8,900 election advertisers and took down 10.7 million political ads.

While the scale of suspensions may raise concerns about fairness, Google said human reviews are included in the appeals process.

The company acknowledged that its enforcement decisions had previously caused confusion and is now updating its messaging to ensure advertisers understand more clearly why action was taken against their accounts.
