Deloitte’s AI blunder: A costly lesson for the consultancy business

Deloitte has agreed to repay the final instalment of its AU$440,000 contract with the Australian government after acknowledging major errors in a consultancy report on welfare mutual obligations. The errors stemmed from the use of AI tools, which produced fabricated content, including false quotes attributed to a Federal Court judgment on the Robodebt scheme and fictitious academic references.

The incident underscores the risks of deploying AI in high-stakes government consultancy work without sufficient human oversight, and it raises questions about the credibility of policy decisions influenced by such flawed reports.

In response, Deloitte has publicly accepted responsibility for the errors and committed to the repayment. The firm is re-evaluating its internal quality assurance procedures and has emphasised the need for rigorous human review to maintain the integrity of consultancy projects that use AI.

The situation has prompted the Australian government to reassess its reliance on AI-generated content for policy analysis, and oversight mechanisms are being reviewed to prevent a recurrence. The report’s inaccuracies had already influenced discussions on welfare compliance, shaking public trust in the consultancy services used for critical policymaking.

The broader consultancy industry is feeling the ripple effects, as this incident highlights the reputational and financial dangers of unchecked AI outputs. As AI becomes more prevalent for its efficiency, this case serves as a stark reminder of its limitations, particularly in sensitive government matters.

Industry pressure is growing for firms to enhance their quality control measures, disclose the level of AI involvement in their reports, and ensure that technology use does not compromise information quality. The Deloitte case adds to ongoing discussions about the ethical and practical integration of AI into professional services, reinforcing the imperative for human oversight and editorial controls even as AI technology progresses.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Breach at third-party support provider exposes Discord user data

Discord has disclosed a security incident after a third-party customer service provider was compromised. The breach exposed personal data from users who contacted Discord’s support and Trust & Safety teams.

An unauthorised party accessed the provider’s ticketing system and targeted user data in an extortion attempt. Discord revoked access, launched an investigation with forensic experts, and notified law enforcement. Impacted users will be contacted via official email.

Compromised information may include usernames, contact details, partial billing data, IP addresses, customer service messages, and limited government-ID images. Passwords, authentication data, and full credit card numbers were not affected.

Discord has notified data protection authorities and strengthened security controls for third-party providers. It has also reviewed threat detection systems to prevent similar incidents.

The company urges affected users to remain vigilant against suspicious messages. Service agents are available to answer questions and provide additional support.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Labour market remains stable despite rapid AI adoption

Surveys show persistent anxiety about AI-driven job losses, yet nearly three years after ChatGPT’s launch, labour data indicate that these fears have not materialised. Researchers examined shifts in the US occupational mix since late 2022, comparing them with earlier technological transitions.

Their analysis found that shifts in job composition have been modest, resembling the gradual changes seen during the rise of computers and the internet. The overall pace of occupational change has not accelerated substantially, suggesting that widespread job losses due to AI have not yet occurred.

Industry-level data show limited impact. High-exposure sectors, such as Information and Professional Services, have seen shifts, but many of these predate ChatGPT’s introduction. Overall, labour market volatility remains below the levels seen in earlier periods of major change.

To better gauge AI’s impact, the study compared OpenAI’s exposure data with Anthropic’s usage data from Claude. The two show limited correlation, indicating that high exposure does not always imply widespread use, especially outside of software and quantitative roles.

Researchers caution that significant labour effects may take longer to emerge, as seen with past technologies. They argue that transparent, comprehensive usage data from major AI providers will be essential to monitor real impacts over time.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Thousands affected by AI-linked data breach in New South Wales

A major data breach has affected the Northern Rivers Resilient Homes Program in New South Wales.

Authorities confirmed that personal information was exposed after a former contractor uploaded data to the AI platform ChatGPT between 12 and 15 March 2025.

The leaked file contained over 12,000 records, with details including names, addresses, contact information and health data. Up to 3,000 individuals may be impacted.

While there is no evidence yet that the information has been accessed by third parties, the NSW Reconstruction Authority (RA) and Cyber Security NSW have launched a forensic investigation.

Officials apologised for the breach and pledged to notify all affected individuals in the coming week. ID Support NSW is offering free advice and resources, while compensation will be provided for any costs linked to replacing compromised identity documents.

The RA has also strengthened its internal policies to prevent unauthorised use of AI platforms. An independent review of the incident is underway to determine how the breach occurred and why notification took several months.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI industry faces recalibration as Altman delays AGI

OpenAI CEO Sam Altman has again adjusted his timeline for achieving artificial general intelligence (AGI). After earlier forecasts for 2023 and 2025, Altman suggests 2030 as a more realistic milestone. The move reflects mounting pressure and shifting expectations in the AI sector.

OpenAI’s public projections come amid challenging financials. Despite a valuation near $500 billion, the company reportedly lost $5 billion last year on $3.7 billion in revenue. Investors remain drawn to ambitious claims of AGI, despite widespread scepticism. Predictions now span from 2026 to 2060.

Experts question whether AGI is feasible under current large language model (LLM) architectures. They point out that LLMs rely on probabilistic patterns in text, lack lived experience, and cannot develop human judgement or intuition from data alone.

Another point of critique is that text-based models cannot fully capture embodied expertise. Fields like law, medicine, and skilled trades depend on hands-on training, tacit knowledge, and real-world context, areas in which AI remains fundamentally limited.

As investors and commentators calibrate expectations, the AI industry may face a reckoning. Altman’s shifting forecasts underscore how hype and uncertainty continue to shape the race toward perceived machine-level intelligence.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Future of work shaped by AI, flexible ecosystems and soft retirement

As technology reshapes workplaces, how we work is set for significant change in the decade’s second half. Seven key trends are expected to drive this transformation, shaped by technological shifts, evolving employee expectations, and new organisational realities.

AI will continue to play a growing role in 2026. Beyond simply automating tasks, companies will increasingly design AI-native workflows built from the ground up to automate, predict, and support decision-making.

Hybrid and remote work will consolidate into flexible ecosystems of tools, networks, and spaces that support employees wherever they are. The trend emphasises seamless experiences, global talent access, and stronger links between remote workers and company culture.

The job landscape will continue to change as AI affects hiring in clerical, administrative, and managerial roles, while sectors such as healthcare, education, and construction grow. Human skills, such as empathy, communication, and leadership, will become increasingly valuable.

Data-driven people management will replace intuition-based approaches, with AI used to find patterns and support evidence-based decisions. Employee experience will also become a key differentiator, reflecting customer-focused strategies to attract and retain talent.

An emerging ‘soft retirement’ trend will see healthier older workers reduce hours rather than stop altogether, offering businesses valuable expertise. Those who adapt early to these trends will be better positioned to thrive in the future of work.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU kicks off cybersecurity awareness campaign against phishing threats

European Cybersecurity Month (ECSM) 2025 has kicked off, with this year’s campaign centring on the growing threat of phishing attacks.

The initiative, driven by the EU Agency for Cybersecurity (ENISA) and the European Commission, seeks to raise awareness and provide practical guidance to European citizens and organisations.

Phishing remains the primary vector through which threat actors launch social engineering attacks. This year’s ECSM materials accordingly expand the scope to include variants such as SMS phishing (smishing), QR code phishing (quishing), voice phishing (vishing), and business email compromise (BEC).

ENISA warns that, as of early 2025, over 80 percent of observed social engineering activity involved the use of AI, with language models enabling more convincing and scalable scams.

To support the campaign, actors at all levels, from individual citizens to large organisations, are encouraged to engage in training, simulations, awareness sessions, and public outreach under the banner #ThinkB4UClick.

A cross-institutional kick-off event is also scheduled, bringing together the EU institutions, member states and civil society to align messaging and launch coordinated activities.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

DualEntry raises $90m to scale AI-first ERP platform

New York ERP startup DualEntry has emerged from stealth with $90 million in Series A funding, co-led by Lightspeed and Khosla Ventures. Investors include GV, Contrary, and Vesey Ventures, bringing the total funding to more than $100 million within 18 months of the company’s founding.

The capital will accelerate the growth of its AI-native ERP platform, which has processed $100 billion in journal entries. The platform targets mid-market finance teams, aiming to automate up to 90% of manual tasks and scale without external IT support or add-ons.

Early adopters include fintech firm Slash, which runs its $100M+ ARR operation with a single finance employee. DualEntry offers a comprehensive ERP suite that covers general ledger, accounts receivable, accounts payable, audit controls, FP&A, and live bank connections.

The company’s NextDay Migration tool enables complete onboarding within 24 hours, securely transferring all data, including subledgers and attachments. With more than 13,000 integrations across banking, CRM, and HR systems, DualEntry establishes a centralised source of accounting information.

Founded in 2024 by Benedict Dohmen and Santiago Nestares, the startup positions itself as a faster, more flexible alternative to legacy systems such as NetSuite, Sage Intacct, and Microsoft Dynamics, while supporting starter tools like QuickBooks and Xero.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI transcription tool aims to speed up police report writing

The Washington County Sheriff’s Office in Oregon is testing an AI transcription service to speed up police report writing. The tool, Draft One, analyses Axon body-worn camera footage to generate draft reports for specific calls, including theft, trespassing, and DUII incidents.

Corporal David Huey said the technology is designed to give deputies more time in the field. He noted that reports which once took around 90 minutes can now be completed in 15 to 20 minutes, freeing officers to focus on policing rather than paperwork.

Deputies in the 60-day pilot must review and edit all AI-generated drafts, and at least 20 percent of each report must be manually adjusted for accuracy. Huey explained that the system deliberately inserts minor errors so that officers remain engaged with the content.

He added that human judgement remains essential for interpreting emotional cues, such as tense body language, which AI cannot detect solely from transcripts. All data generated by Draft One is securely stored within Axon’s network.

After the pilot concludes, the sheriff’s office and the district attorney will determine whether to adopt the system permanently. If successful, the tool could mark a significant step in integrating AI into everyday law enforcement operations.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

FRA presents rights framework at EU Innovation Hub AI Cluster workshop in Tallinn

The EU Innovation Hub for Internal Security’s AI Cluster gathered in Tallinn on 25–26 September for a workshop focused on AI and its implications for security and rights.

The European Union Agency for Fundamental Rights (FRA) played a central role, presenting its Fundamental Rights Impact Assessment framework under the AI Act and highlighting its ongoing project on assessing high-risk AI.

The workshop also provided an opportunity for FRA to give an update on its internal and external work in the AI field, reflecting the growing need to balance technological innovation with rights-based safeguards.

AI-driven systems in security and policing are increasingly under scrutiny, with regulators and agencies seeking to ensure compliance with EU rules on privacy, transparency and accountability.

In collaboration with Europol, FRA also introduced plans for a panel discussion on ‘The right to explanation of AI-driven individual decision-making’. Scheduled for 19 November in Brussels, the session will form part of the Annual Event of the EU Innovation Hub for Internal Security.

It is expected to draw policymakers, law enforcement representatives and rights advocates into dialogue about transparency obligations in AI use for security contexts.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!