Breach at third-party support provider exposes Discord user data

Discord has disclosed a security incident after a third-party customer service provider was compromised. The breach exposed personal data from users who contacted Discord’s support and Trust & Safety teams.

An unauthorised party accessed the provider’s ticketing system and targeted user data in an extortion attempt. Discord revoked access, launched an investigation with forensic experts, and notified law enforcement. Impacted users will be contacted via official email.

Compromised information may include usernames, contact details, partial billing data, IP addresses, customer service messages, and limited government-ID images. Passwords, authentication data, and full credit card numbers were not affected.

Discord has notified data protection authorities and strengthened security controls for its third-party providers. It has also reviewed its threat detection systems to help prevent similar incidents.

The company urges affected users to remain vigilant against suspicious messages. Service agents are available to answer questions and provide additional support.

EU prepares new AI strategy to cut reliance on the US and China

The EU is preparing to unveil a new strategy to reduce reliance on American and Chinese technology by accelerating the growth of homegrown AI.

The ‘Apply AI strategy’, set to be presented by EU tech chief Henna Virkkunen, positions AI as a strategic asset essential for the bloc’s competitiveness, security and resilience.

According to draft documents, the plan will prioritise adopting European-made AI tools across healthcare, defence and manufacturing.

Public administrations are expected to play a central role by integrating open-source EU AI systems, providing a market for local start-ups and reducing dependence on foreign platforms. The Commission has pledged €1bn from existing financing programmes to support the initiative.

Brussels has warned that foreign control of the ‘AI stack’ (the hardware and software that underpin advanced systems) could be ‘weaponised’ by state and non-state actors.

These concerns have intensified as Europe remains dependent on American tech infrastructure. Meanwhile, China’s rapid progress in AI has further raised fears that the Union risks losing influence in shaping the technology’s future.

The EU already hosts several high-potential AI firms, including France’s Mistral and Germany’s Helsing, but they rely heavily on overseas suppliers for software, hardware, and critical minerals.

The Commission wants to accelerate the deployment of European AI-enabled defence tools, such as command-and-control systems, which remain dependent on NATO and US providers. The strategy also outlines investment in sovereign frontier models for areas like space defence.

President Ursula von der Leyen said the bloc aims to ‘speed up AI adoption across the board’ to ensure it does not miss the transformative wave.

Brussels hopes to carve out a more substantial global role in the next phase of technological competition by reframing AI as an industrial sovereignty and security instrument.

Thousands affected by AI-linked data breach in New South Wales

A major data breach has affected the Northern Rivers Resilient Homes Program in New South Wales.

Authorities confirmed that personal information was exposed after a former contractor uploaded data to the AI platform ChatGPT between 12 and 15 March 2025.

The leaked file contained over 12,000 records, with details including names, addresses, contact information and health data. Up to 3,000 individuals may be impacted.

While there is no evidence yet that the information has been accessed by third parties, the NSW Reconstruction Authority (RA) and Cyber Security NSW have launched a forensic investigation.

Officials apologised for the breach and pledged to notify all affected individuals in the coming week. ID Support NSW is offering free advice and resources, while compensation will be provided for any costs linked to replacing compromised identity documents.

The RA has also strengthened its internal policies to prevent unauthorised use of AI platforms. An independent review of the incident is underway to determine how the breach occurred and why notification took several months.

AI industry faces recalibration as Altman delays AGI

OpenAI CEO Sam Altman has again adjusted his timeline for achieving artificial general intelligence (AGI). After earlier forecasts pointing to 2023 and 2025, he now suggests 2030 as a more realistic milestone. The shift reflects mounting pressure and changing expectations in the AI sector.

OpenAI’s public projections come amid challenging financials. Despite a valuation near $500 billion, the company reportedly lost $5 billion last year on $3.7 billion in revenue. Investors remain drawn to ambitious claims of AGI, despite widespread scepticism. Predictions now span from 2026 to 2060.

Experts question whether AGI is feasible under current large language model (LLM) architectures. They point out that LLMs rely on probabilistic patterns in text, lack lived experience, and cannot develop human judgement or intuition from data alone.
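
As a toy illustration of what ‘probabilistic patterns’ means here, the sketch below uses made-up scores rather than a real model to show how a language model turns candidate next tokens into a probability distribution and samples from it.

```python
import math
import random

def softmax(logits):
    """Convert raw token scores into a probability distribution."""
    m = max(logits.values())
    exps = {tok: math.exp(score - m) for tok, score in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

# Hypothetical scores a model might assign to candidate next tokens
# after the prompt "The capital of France is" (illustrative numbers only).
logits = {"Paris": 9.1, "Lyon": 4.3, "a": 2.0, "London": 1.5}

probs = softmax(logits)
next_token = random.choices(list(probs), weights=probs.values(), k=1)[0]

print(probs)       # "Paris" dominates, but alternatives keep nonzero probability
print(next_token)  # sampled continuation: usually "Paris", occasionally not
```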

Another point of critique is that text-based models cannot fully capture embodied expertise. Fields like law, medicine, or skilled trades depend on hands-on training, tacit knowledge, and real-world context, where AI remains fundamentally limited.

As investors and commentators calibrate expectations, the AI industry may face a reckoning. Altman’s shifting forecasts underscore how hype and uncertainty continue to shape the race toward perceived machine-level intelligence.

Future of work shaped by AI, flexible ecosystems and soft retirement

As technology reshapes workplaces, how we work is set for significant change in the decade’s second half. Seven key trends are expected to drive this transformation, shaped by technological shifts, evolving employee expectations, and new organisational realities.

AI will continue to play a growing role in 2026. Beyond simply automating tasks, companies will increasingly design AI-native workflows built from the ground up to automate, predict, and support decision-making.

Hybrid and remote work will solidify into flexible ecosystems of tools, networks, and spaces that support employees wherever they are. The trend emphasises seamless experiences, global talent access, and stronger links between remote workers and company culture.

The job landscape will continue to change as AI affects hiring in clerical, administrative, and managerial roles, while sectors such as healthcare, education, and construction grow. Human skills, such as empathy, communication, and leadership, will become increasingly valuable.

Data-driven people management will replace intuition-based approaches, with AI used to find patterns and support evidence-based decisions. Employee experience will also become a key differentiator, reflecting customer-focused strategies to attract and retain talent.

An emerging ‘soft retirement’ trend will see healthier older workers reduce hours rather than stop altogether, offering businesses valuable expertise. Those who adapt early to these trends will be better positioned to thrive in the future of work.

Nintendo denies lobbying the Japanese government over generative AI

Video game company Nintendo has denied reports that it lobbied the Japanese government over the use of generative AI. The company issued an official statement on its Japanese X account, clarifying that it has had no contact with the authorities.

The rumour originated from a post by Satoshi Asano, a member of Japan’s House of Representatives, who suggested that private companies had pressed the government on intellectual property protection concerning AI.

After Nintendo’s statement, Asano retracted his remarks and apologised for spreading misinformation.

Nintendo stressed that it would continue to protect its intellectual property against infringement, whether AI was involved or not. The company reaffirmed its cautious approach toward generative AI in game development, focusing on safeguarding creative rights rather than political lobbying.

The episode underscores the sensitivity around AI in Japan’s creative industries, where concerns about copyright and technological disruption are fuelling debate. Nintendo’s swift clarification signals how seriously it takes both misinformation and the protection of its brand.

Labour market stability persists despite the rise of AI

Public fears of AI rapidly displacing workers have not yet materialised in the US labour market.

A new study finds that the overall occupational mix has shifted only slightly since the launch of generative AI in November 2022, with changes resembling past technological transitions such as the rise of computers and the internet.

The pace of disruption is not significantly faster than historical benchmarks.
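
The article does not name the study’s metric, but a common way to quantify such a shift in the occupational mix is a dissimilarity index: half the sum of absolute differences in occupational employment shares between two dates. The sketch below, with purely illustrative figures, shows the calculation.

```python
def dissimilarity_index(shares_before, shares_after):
    """Half the sum of absolute differences in occupational employment shares.

    0.0 means the occupational mix is unchanged; 1.0 means complete turnover.
    """
    occupations = set(shares_before) | set(shares_after)
    return 0.5 * sum(
        abs(shares_after.get(o, 0.0) - shares_before.get(o, 0.0))
        for o in occupations
    )

# Hypothetical employment shares (fractions of total employment) before and
# after the arrival of generative AI; the numbers are not from the study.
pre_2022  = {"clerical": 0.12, "software": 0.05, "healthcare": 0.14, "other": 0.69}
post_2024 = {"clerical": 0.11, "software": 0.05, "healthcare": 0.15, "other": 0.69}

print(f"Dissimilarity index: {dissimilarity_index(pre_2022, post_2024):.3f}")
# A small value such as 0.010 would indicate only a slight shift in the mix.
```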

Industry-level data show some variation, particularly in information services, finance, and professional sectors, but trends were already underway before AI tools became widely available.

Similarly, younger workers have not seen a dramatic divergence in opportunities compared with older graduates, suggesting that AI’s impact on early careers remains modest and difficult to isolate.

Exposure, automation, and augmentation metrics offer little evidence of widespread displacement. OpenAI’s exposure data and Anthropic’s usage data suggest that the share of workers most exposed to AI has remained stable, including among the unemployed.

Even in roles theoretically vulnerable to automation, there has been no measurable increase in job losses.

The study concludes that AI’s labour effects are gradual rather than immediate. Historical precedent suggests that large-scale workforce disruption unfolds over decades, not months. Researchers plan to monitor the data to track whether AI’s influence becomes more visible over time.

DualEntry raises $90m to scale AI-first ERP platform

New York ERP startup DualEntry has emerged from stealth with $90 million in Series A funding, co-led by Lightspeed and Khosla Ventures. Investors include GV, Contrary, and Vesey Ventures, bringing the total funding to more than $100 million within 18 months of the company’s founding.

The capital will accelerate the growth of its AI-native ERP platform, which has processed $100 billion in journal entries. The platform targets mid-market finance teams, aiming to automate up to 90% of manual tasks and scale without external IT support or add-ons.

Early adopters include fintech firm Slash, which runs its $100M+ ARR operation with a single finance employee. DualEntry offers a comprehensive ERP suite that covers general ledger, accounts receivable, accounts payable, audit controls, FP&A, and live bank connections.

The company’s NextDay Migration tool enables complete onboarding within 24 hours, securely transferring all data, including subledgers and attachments. With more than 13,000 integrations across banking, CRM, and HR systems, DualEntry establishes a centralised source of accounting information.

Founded in 2024 by Benedict Dohmen and Santiago Nestares, the startup positions itself as a faster, more flexible alternative to legacy systems such as NetSuite, Sage Intacct, and Microsoft Dynamics, while supporting starter tools like QuickBooks and Xero.

Diag2Diag brings fusion reactors closer to commercial viability

Researchers have developed an AI tool that could make fusion power more reliable and affordable. Diag2Diag reconstructs missing sensor data to give scientists a clearer view of plasma, helping address one of fusion energy’s biggest challenges.

Developed through a collaboration led by Princeton University and the US Department of Energy’s Princeton Plasma Physics Laboratory, Diag2Diag analyses multiple diagnostics in real time to generate synthetic, high-resolution data. It improves plasma control and cuts reliance on costly hardware.
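
The article does not describe Diag2Diag’s architecture, but the underlying idea (inferring a missing or coarse diagnostic from other, correlated diagnostics) can be sketched as a simple supervised regression. The toy example below uses NumPy least squares on synthetic data purely to illustrate the concept; the real system relies on a far richer learned model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for correlated plasma diagnostics: three "available"
# sensor channels and one "missing" channel that depends on them.
n_samples = 500
available = rng.normal(size=(n_samples, 3))   # e.g. magnetics, interferometry, ECE
true_weights = np.array([0.8, -0.5, 0.3])
missing = available @ true_weights + 0.05 * rng.normal(size=n_samples)

# Fit a linear map from the available channels to the missing one.
weights, *_ = np.linalg.lstsq(available, missing, rcond=None)

# Reconstruct the "missing" diagnostic for new measurements where the
# physical sensor is absent or too coarse.
new_measurements = rng.normal(size=(5, 3))
reconstructed = new_measurements @ weights
print(reconstructed)
```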

A key use of Diag2Diag is improving the study of the plasma pedestal, the fuel’s outer layer. Current methods miss sudden changes or lack detail. The AI fills these gaps without new instruments, helping researchers fine-tune stability.

The system has also advanced research into edge-localised modes, or ELMs, which are bursts of energy that can damage reactor walls. It revealed how magnetic perturbations create ‘magnetic islands’ that flatten plasma temperature and density, supporting a leading theory on ELM suppression.

Although designed for fusion, Diag2Diag could also enhance reliability in fields such as spacecraft monitoring and robotic surgery. For fusion specifically, it supports smaller, cheaper, and more dependable reactors, bringing the prospect of clean, round-the-clock power closer to reality.

AI transcription tool aims to speed up police report writing

The Washington County Sheriff’s Office in Oregon is testing an AI transcription service to speed up police report writing. The tool, Draft One, analyses Axon body-worn camera footage to generate draft reports for specific calls, including theft, trespassing, and DUII incidents.

Corporal David Huey said the technology is designed to give deputies more time in the field. He noted that reports which once took around 90 minutes can now be completed in 15 to 20 minutes, freeing officers to focus on policing rather than paperwork.

Deputies in the 60-day pilot must review and edit all AI-generated drafts. At least 20 percent of each report must be manually adjusted to ensure accuracy. Huey explained that the system deliberately inserts minor errors to ensure officers remain engaged with the content.
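
Axon has not published how the 20 percent threshold is enforced; a hypothetical agency-side check could compare the AI draft with the deputy’s final text using a simple similarity measure, as in this illustrative sketch.

```python
import difflib

MIN_EDIT_RATIO = 0.20  # pilot policy: at least 20% of the draft must change

def edit_ratio(ai_draft: str, final_report: str) -> float:
    """Fraction of the text that differs between the draft and the final report."""
    similarity = difflib.SequenceMatcher(None, ai_draft, final_report).ratio()
    return 1.0 - similarity

def passes_review_policy(ai_draft: str, final_report: str) -> bool:
    return edit_ratio(ai_draft, final_report) >= MIN_EDIT_RATIO

draft = "Deputy responded to a report of theft at the listed address."
final = ("Deputy Smith responded to a reported shoplifting incident at the "
         "listed address and interviewed the store manager on arrival.")

print(f"Edit ratio: {edit_ratio(draft, final):.0%}")
print("Meets 20% review threshold:", passes_review_policy(draft, final))
```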

He added that human judgement remains essential for interpreting emotional cues, such as tense body language, which AI cannot detect solely from transcripts. All data generated by Draft One is securely stored within Axon’s network.

After the pilot concludes, the sheriff’s office and the district attorney will determine whether to adopt the system permanently. If successful, the tool could mark a significant step in integrating AI into everyday law enforcement operations.
