D3FEND 1.0 brings structured security graphs

MITRE has unveiled its new Cyber Attack–Defense (CAD) tool as part of the D3FEND 1.0 release, offering security teams a structured way to model and counter cyber threats.

The browser‑based interface lets users build ‘D3FEND Graphs’—knowledge graphs grounded in a rich cybersecurity ontology—instead of relying on ad hoc PowerPoint diagrams.

Graph components include Attack nodes (tied to MITRE ATT&CK techniques), Countermeasure nodes (D3FEND defensive measures) and Digital Artifact nodes (elements from the D3FEND artifact ontology).

A drag‑and‑drop canvas enables rapid scene‑setting, while an ‘explode’ feature reveals related attack paths, defences or artefacts drawn from the ontology’s knowledge base.

Organisations can apply the CAD tool across threat intelligence, security engineering, detection scenario planning, incident investigation and risk assessments.

Graphs can be exported as JSON, TTL or PNG for collaboration, and STIX 2.1 import lets teams bring existing threat data into a graph. Users may also extend the underlying ontology to capture emerging techniques.
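To make that structure concrete, here is a minimal, hypothetical sketch in Python using the networkx library: an attack node, a digital artifact node and a countermeasure node linked by simple relations, serialised to node-link JSON. The identifiers, edge relations and JSON layout are illustrative assumptions for this article, not the CAD tool’s actual export schema.

```python
# Hypothetical sketch of a D3FEND-style attack–defence graph.
# Node identifiers, edge relations and the JSON layout are illustrative
# only; they do not reproduce the CAD tool's real export schema.
import json

import networkx as nx
from networkx.readwrite import json_graph

graph = nx.DiGraph()

# An attack node (an ATT&CK technique), the digital artifact it produces,
# and a countermeasure node (a D3FEND technique) that analyses that artifact.
graph.add_node("T1071", kind="attack", label="Application Layer Protocol")
graph.add_node("network-traffic", kind="digital-artifact", label="Network Traffic")
graph.add_node("D3-NTA", kind="countermeasure", label="Network Traffic Analysis")

graph.add_edge("T1071", "network-traffic", relation="produces")
graph.add_edge("D3-NTA", "network-traffic", relation="analyses")

# Serialise to a shareable JSON document, loosely analogous to the
# CAD tool's JSON export.
print(json.dumps(json_graph.node_link_data(graph), indent=2))
```

In such a graph, the ‘explode’ feature described above roughly corresponds to querying the ontology for neighbouring techniques and artefacts and adding them as further nodes and edges.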

Built in partnership with the NSA and various defence departments, D3FEND 1.0 and its CAD tool establish a common vocabulary and conceptual framework for cybersecurity operations.

As threats grow ever more complex, a methodical, semantically rigorous approach to modelling defences is set to become indispensable.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT search grows rapidly in Europe

ChatGPT search, the web-accessing feature within OpenAI’s chatbot, has seen rapid growth across Europe, attracting an average of 41.3 million monthly active users in the six months leading up to March 31.

It marks a sharp rise from 11.2 million in the previous six-month period, according to a regulatory filing by OpenAI Ireland Limited.

The service must now report this data under the EU’s Digital Services Act (DSA), which defines monthly recipients as users who actively view or interact with the platform.

Should usage cross 45 million, ChatGPT search could be classified as a ‘very large’ online platform and face stricter rules, including transparency obligations, user opt-outs from personalised recommendations, and regular audits.

Failure to comply with the DSA could bring serious penalties, up to 6% of OpenAI’s global revenue, or even a temporary EU ban for persistent violations. The law aims to ensure that online platforms operate more responsibly and under closer oversight.

Despite gaining ground, ChatGPT search still lags far behind Google, which handles hundreds of times more queries.

Studies have also raised concerns about the accuracy of AI search tools, with ChatGPT found to misidentify a majority of news articles and occasionally misrepresent licensed content from publishers.

For now, these AI tools still need improvement before they can serve as reliable alternatives to traditional search.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI expert Aidan Gomez joins Rivian board

Aidan Gomez, co‑founder and chief executive of AI specialist Cohere, has been appointed to the board of electric‑vehicle maker Rivian, according to a recent regulatory filing.

Rivian expanded its board and elected Gomez for a term running until 2026, signalling the company’s intent to deepen its software credentials.

Gomez brings a distinguished AI pedigree, having co‑authored the seminal 2017 paper ‘Attention Is All You Need’ and led research at Google Brain before launching Cohere in 2019.

Under his leadership, Cohere has trained large‐scale foundation models for enterprise clients such as Oracle and Notion, positioning it at the forefront of generative AI.

Rivian is already collaborating on a $5.8 billion joint venture with Volkswagen to develop and license its electrical architecture and software. Parallel efforts include the creation of an in‑vehicle AI assistant, overseen by Rivian’s chief software officer, Wassym Bensaid.

Founder and CEO RJ Scaringe praised Gomez’s expertise as vital for integrating ‘cutting‑edge technologies into our products, services, and manufacturing.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Linguists find new purpose in the age of AI

In his latest blog, part of a series expanding on ‘Don’t Waste the Crisis: How AI Can Help Reinvent International Geneva’, Dr Jovan Kurbalija explores how linguists can shift from fearing AI to embracing a new era of opportunity. Geneva, home to over a thousand translators and interpreters, has felt the pressure as AI tools like ChatGPT began automating language tasks.

Yet, rather than rendering linguists obsolete, AI is transforming their role, highlighting the enduring importance of human expertise in bridging syntax and semantics—AI’s persistent blind spot. Dr Kurbalija emphasises that while AI excels at recognising patterns, it often fails to grasp meaning, nuance, and cultural context.

This is where linguists step in, offering critical value by enhancing AI’s understanding of language beyond mere structure. From supporting low-resource languages to ensuring ethical AI outputs in sensitive fields like law and diplomacy, linguists are positioned as key players in shaping responsible and context-aware AI systems.

Calling for adaptation over resistance, Dr Kurbalija advocates for linguists to upskill, specialise in areas where human judgement is irreplaceable, collaborate with AI developers, and champion ethical standards. Rather than facing decline, the linguistic profession is entering a renaissance, where embracing syntax and semantics ensures that AI amplifies human expression instead of diminishing it.

With Geneva’s vibrant multilingual community at the forefront, linguists have a pivotal role in guiding how language and technology evolve together in this new frontier.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TSMC struggles to block chip exports to China

Taiwan Semiconductor Manufacturing Company (TSMC) has acknowledged it faces significant challenges in ensuring its advanced chips do not end up with sanctioned entities in China, despite tightening export controls.

The company admitted in its latest annual report that its position as a contract chipmaker limits its visibility into how and where its semiconductors are ultimately used.

TSMC does not sell finished products directly; it manufactures chips for firms like Nvidia and Qualcomm, and those chips are then integrated into a wide range of devices by third parties.

A layered supply chain structure like this makes it difficult for the company to guarantee full compliance with export restrictions, especially when intermediaries may divert shipments intentionally.

TSMC halted deliveries to a customer last year after discovering one of its AI chips had been diverted to Huawei, a Chinese tech giant on the US sanctions list. The company promptly notified both Washington and Taipei and has since cooperated with official investigations and information requests.

The US continues to tighten restrictions on advanced chip exports to China, urging companies like TSMC and Samsung to apply stricter scrutiny.

Recently, Washington blacklisted 16 Chinese entities, including firms allegedly linked to the unauthorised transfer of TSMC chips. Despite best efforts, TSMC says there is no assurance it can completely prevent such incidents.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI startup Cluely offers controversial cheating tool

A controversial new startup called Cluely has secured $5.3 million in seed funding to expand its AI-powered tool designed to help users ‘cheat on everything,’ from job interviews to exams.

Founded by 21-year-old Chungin ‘Roy’ Lee and Neel Shanmugam—both former Columbia University students—the tool works via a hidden browser window that remains invisible to interviewers or test supervisors.

The project began as ‘Interview Coder,’ originally intended to help users pass technical coding interviews on platforms like LeetCode.

Both founders faced disciplinary action at Columbia over the tool, eventually dropping out of the university. Despite ethical concerns, Cluely claims its technology has already surpassed $3 million in annual recurring revenue.

The company has drawn comparisons between its tool and past innovations like the calculator and spellcheck, arguing that it challenges outdated norms in the same way. A viral launch video showing Lee using Cluely on a date sparked backlash, with critics likening it to a scene from Black Mirror.

Cluely’s mission has sparked widespread debate over the use of AI in high-stakes settings. While some applaud its bold approach, others worry it promotes dishonesty.

Amazon, where Lee reportedly landed an internship using the tool, declined to comment on the case directly but reiterated that candidates must agree not to use unauthorised tools during the hiring process.

The startup’s rise comes amid growing concern over how AI may be used—or misused—in both professional and personal spheres.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple makes climate progress with greener supply chain

Apple has made progress in reducing its environmental impact, according to the company’s own latest environmental progress report.

Its total greenhouse gas emissions dropped by 800,000 metric tons in 2024, marking a 5 percent reduction from the previous year.

Over the last decade, Apple has cut its global emissions by more than 60 percent, a notable achievement at a time when emissions from other tech firms continue to rise due to the growing demands of AI.

The reduction stems from efforts to use renewable energy, increase recycling, and work with suppliers to cut emissions. Apple reported that its suppliers collectively avoided nearly 24 million metric tons of greenhouse gas emissions last year through cleaner energy and improved efficiency.

The company is also tackling highly potent fluorinated gases used in making semiconductors and displays, with all direct display suppliers and 26 semiconductor partners committing to reducing such emissions by at least 90 percent.

Recycled materials played a larger role in Apple’s products in 2024, making up nearly a quarter of all materials used. Notably, 80 percent of the rare earth elements and most of the tungsten, cobalt, and aluminium used came from recycled sources.

Despite these efforts, Apple still generated 15.3 million metric tons of CO₂ last year, though it aims to reduce emissions by 75 percent from 2015 levels by 2030 and eliminate 90 percent by 2050 to meet international climate goals.
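As a rough back-of-envelope reading of those figures (an illustrative calculation from the numbers reported above, not Apple’s own accounting), the implied 2015 baseline and the absolute levels behind the 2030 and 2050 targets work out approximately as follows:

```python
# Back-of-envelope estimate from the figures reported above; the ~60%
# reduction is approximate, so these are rough illustrations only.
emissions_2024 = 15.3                        # million metric tons CO2e
baseline_2015 = emissions_2024 / (1 - 0.60)  # implied 2015 baseline, ~38 Mt
target_2030 = baseline_2015 * (1 - 0.75)     # 75% cut, ~9.6 Mt
target_2050 = baseline_2015 * (1 - 0.90)     # 90% cut, ~3.8 Mt
print(f"{baseline_2015:.1f} {target_2030:.1f} {target_2050:.1f}")
```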

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Royal Unibrew embraces hybrid AI-human teams

Denmark’s Royal Unibrew has introduced five AI-generated ‘colleagues’ into its workforce, in a move the brewer describes as a step towards unlocking the full potential of its staff.

Designed by Danish firm Manifold AI, the digital assistants are integrated into daily operations, assisting with tasks such as market analysis, data management, and food pairing. Each AI has a name, backstory and face, which, according to the company, has significantly increased engagement among employees.

The virtual colleagues – named KondiKai, Athena, Prometheus, Moller and Ella – are used across departments via chat and email. Their arrival has helped streamline routine tasks, allowing human employees to focus on creative and strategic work.

According to staff, their input has improved efficiency, particularly by reducing time spent searching past reports or handling emails. AI agent Athena, for example, assists with real-time market insights and report navigation for Royal Unibrew’s analysts.

While employees have welcomed the AI tools, managers caution that human judgement remains essential.

Marketing director Michala Svane believes the blend of digital and human capabilities can create more agile teams, but others stress the need for critical thinking when relying on machine-generated input.

Experts also raise questions about the long-term effects of such hybrid teams, including the psychological and social dynamics of working alongside AI ‘colleagues’.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

DOJ adds AI-driven search monopoly concerns to Google’s antitrust trial

The US Department of Justice (DOJ) launched its opening arguments this week in a landmark antitrust trial against Google, aiming to curb the tech giant’s dominance in online search and prevent it from leveraging AI to entrench its position further.

Prosecutors argue that Google’s market control is bolstered by exclusive contracts, such as being the default smartphone search engine, and now by integrating AI tools that guide users back to its ecosystem. 

The DOJ calls for decisive action, including the potential sale of Google’s Chrome browser and changes to its default settings agreements with device manufacturers.

Central to the DOJ’s argument is the concern that Google’s AI products, including its Gemini app installed on Samsung devices, create feedback loops reinforcing its search monopoly.

Court documents reveal that Google pays Samsung a significant monthly sum for this privilege, with the deal potentially extending into 2028. 

The DOJ contends that remedies must be forward-looking to ensure competition as generative AI becomes increasingly intertwined with search.

Google, however, rejects the proposed measures as excessive. Its legal team argues that competitors perform well in AI without regulatory intervention and that forced divestitures or licensing obligations would harm innovation. 

The company insists that AI falls outside the scope of the case, which is focused on traditional search, and has pledged to appeal any adverse ruling. 

A key concern for Google is the DOJ’s suggestion that, should other remedies fail, the court could mandate the breakup of its Android mobile business, a move Google claims would disrupt the wider digital ecosystem.

DOJ officials emphasised that the legal remedies proposed are nonpartisan and reflect a consistent policy direction. Meanwhile, other tech giants, including Meta, are also under increasing scrutiny, with separate trials looming over market dominance and acquisitions.

AI startup caught in Dev Mode trademark row

Figma has issued a cease-and-desist letter to Swedish AI startup Lovable over the use of the term ‘Dev Mode,’ a name Figma trademarked in 2023.

Lovable recently introduced its own Dev Mode feature, prompting the design platform to demand the startup stop using the name, citing its established use and intellectual property rights.

Figma’s version of Dev Mode helps bridge the gap between designers and developers, while Lovable’s tool allows users to preview and edit code without linking to GitHub.

Although the two features serve different purposes, Figma insists on protecting the trademark, even though ‘developer mode’ is a widely used phrase across many software platforms. Companies such as Atlassian and Wix used similar terminology long before Figma obtained the trademark.

The legal move arrives as Figma prepares for an initial public offering, following Adobe’s failed acquisition attempt in 2023. The sudden emphasis on brand protection suggests the company is taking extra care with its intellectual assets ahead of its potential stock market debut.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!