AI tools reshape how Gen Z approaches buying cars

Gen Z drivers are increasingly turning to AI tools to help them decide which car to buy. A new Motor Ombudsman survey of 1,100 UK drivers finds that over one in four Gen Z drivers would rely on AI guidance when purchasing a vehicle, compared with 12% of Gen X drivers and just 6% of Baby Boomers.

Younger drivers view AI as a neutral and judgment-free resource. Nearly two-thirds say it helps them make better decisions, while over half appreciate the ability to ask unlimited questions. Many see AI as a fast and convenient way to access information during car-buying.

Three-quarters of Gen Z respondents believe AI could help them estimate price ranges, while 60% think it would improve their haggling skills. Around four in ten say it would help them assess affordability and running costs, a sentiment less common among Millennials and Gen Xers.

Confidence levels also vary across generations. About 86% of Gen Z and 87% of Millennials say they would feel more assured if they used AI before making a purchase, compared with 39% of Gen Xers and 40% of Boomers, many of whom remain indifferent to its influence.

Almost half of drivers say they would take AI-generated information at face value. Gen Z is the most trusting, while older generations remain cautious. The Motor Ombudsman urges buyers to treat AI as a complement to trusted research and retailer checks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Beware the language of human flourishing in AI regulation

TechPolicy.Press recently published ‘Confronting Empty Humanism in AI Policy’, a thought piece by Matt Blaszczyk arguing that human-centred and humanistic language is widespread in AI policy but often not backed by meaningful legal or regulatory substance.

Blaszczyk observes that figures such as Peter Thiel contribute to a discourse that questions the very value of human existence, but he finds equally worrying the voices that use humanist, democratic, and romantic rhetoric to preserve the status quo. Such narratives can be weaponised by actors seeking to reassure the public while avoiding strong regulation.

The article analyses executive orders, AI action plans, and regulatory proposals that promise human flourishing or the protection of civil liberties, but often do so under deregulatory frameworks or with only voluntary oversight.

For example, the EU AI Act is praised yet criticised for gaps and loopholes, and many of its ‘human-in-the-loop’ provisions risk reducing humans to mere rubber stamps.

Blaszczyk suggests that nominal humanism is used as a rhetorical shield. Humans are placed formally at the centre of laws and frameworks (copyright, free speech, democratic values), but real influence, rights protection, and liability often remain minimal.

He warns that without enforcement, oversight and accountability, human-centred AI policies risk becoming slogans rather than safeguards.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Policy hackathon shapes OpenAI proposals ahead of EU AI strategy

OpenAI has published 20 policy proposals to speed up AI adoption across the EU. Released shortly before the European Commission’s Apply AI Strategy, the report outlines practical steps for member states, businesses, and the public sector to bridge the gap between ambition and deployment.

The proposals originate from Hacktivate AI, a Brussels hackathon with 65 participants from EU institutions, governments, industry, and academia. They focus on workforce retraining, SME support, regulatory harmonisation, and public sector collaboration, highlighting OpenAI’s growing policy role in Europe.

Key ideas include Individual AI Learning Accounts to support workers, an AI Champions Network to mobilise SMEs, and a European GovAI Hub to share resources with public institutions. OpenAI’s Martin Signoux said the goal was to bridge the divide between strategy and action.

Europe already represents a major market for OpenAI tools, with widespread use among developers and enterprises, including Sanofi, Parloa, and Pigment. Yet adoption remains uneven, with IT and finance leading, manufacturing catching up, and other sectors lagging behind, exposing a widening digital divide.

The European Commission is expected to unveil its Apply AI Strategy within days. OpenAI’s proposals act as a direct contribution to the policy debate, complementing previous initiatives such as its EU Economic Blueprint and partnerships with governments in Germany and Greece.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Deloitte’s AI blunder: A costly lesson for the consultancy business

Deloitte has agreed to refund the Australian government the full amount of $440,000 after acknowledging major errors in a consultancy report on welfare mutual obligations. The errors stemmed from the use of AI tools, which produced fabricated content, including false quotes attributed to a Federal Court case on the Robodebt scheme and fictitious academic references.

The incident underscores the challenges of deploying AI in critical government consultancy projects without sufficient human oversight, and it raises questions about the credibility of policy decisions influenced by such flawed reports.

In response to these errors, Deloitte has publicly accepted full responsibility and committed to refunding the government. The firm is re-evaluating its internal quality assurance procedures and has emphasised the necessity of rigorous human review to maintain the integrity of consultancy projects that utilise AI.

The situation has prompted the Australian government to reassess its reliance on AI-generated content for policy analysis, and a review of oversight mechanisms is under way to prevent future occurrences. The report’s inaccuracies had previously swayed discussions on welfare compliance, shaking public trust in the consultancy services used for critical policymaking.

The broader consultancy industry is feeling the ripple effects, as this incident highlights the reputational and financial dangers of unchecked AI outputs. As AI becomes more prevalent for its efficiency, this case serves as a stark reminder of its limitations, particularly in sensitive government matters.

Industry pressure is growing for firms to enhance their quality control measures, disclose the level of AI involvement in their reports, and ensure that technology use does not compromise information quality. The Deloitte case adds to ongoing discussions about the ethical and practical integration of AI into professional services, reinforcing the imperative for human oversight and editorial controls even as AI technology progresses.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Breach at third-party support provider exposes Discord user data

Discord has disclosed a security incident after a third-party customer service provider was compromised. The breach exposed personal data from users who contacted Discord’s support and Trust & Safety teams.

An unauthorised party accessed the provider’s ticketing system and targeted user data in an extortion attempt. Discord revoked access, launched an investigation with forensic experts, and notified law enforcement. Impacted users will be contacted via official email.

Compromised information may include usernames, contact details, partial billing data, IP addresses, customer service messages, and limited government-ID images. Passwords, authentication data, and full credit card numbers were not affected.

Discord has notified data protection authorities and strengthened security controls for third-party providers. It has also reviewed threat detection systems to prevent similar incidents.

The company urges affected users to remain vigilant against suspicious messages. Service agents are available to answer questions and provide additional support.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Labour market remains stable despite rapid AI adoption

Surveys show persistent anxiety about AI-driven job losses. Nearly three years after ChatGPT’s launch, labour data indicate that these fears have not materialised. Researchers examined shifts in the US occupational mix since late 2022, comparing them to earlier technological transitions.

Their analysis found that shifts in job composition have been modest, resembling the gradual changes seen during the rise of computers and the internet. The overall pace of occupational change has not accelerated substantially, suggesting that widespread job losses due to AI have not yet occurred.
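
One standard way to summarise such shifts, though not necessarily the metric this study used, is a dissimilarity index: half the sum of the absolute changes in each occupation’s employment share between two dates. A minimal Python sketch, with purely illustrative shares rather than the study’s data:

    # Dissimilarity index: half the sum of absolute changes in each
    # occupation's employment share. Values near 0 indicate a stable
    # occupational mix. All occupations and shares here are illustrative.
    shares_2022 = {"software": 0.020, "admin": 0.100, "health": 0.140, "other": 0.740}
    shares_2025 = {"software": 0.025, "admin": 0.090, "health": 0.150, "other": 0.735}

    index = 0.5 * sum(abs(shares_2025[o] - shares_2022[o]) for o in shares_2022)
    print(f"Dissimilarity index: {index:.3f}")  # 0.015 here, i.e. a modest shift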

Industry-level data show limited impact. High-exposure sectors, such as Information and Professional Services, have seen shifts, but many predate ChatGPT’s introduction. Overall, labour market volatility remains below the levels seen in historical periods of major change.

To better gauge AI’s impact, the study compared OpenAI’s exposure data with Anthropic’s usage data from Claude. The two show limited correlation, indicating that high exposure does not always imply widespread use, especially outside of software and quantitative roles.
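
That comparison amounts to correlating two occupation-level series. A minimal sketch of such a check, using a rank correlation over entirely hypothetical exposure scores and usage shares (the real OpenAI and Anthropic datasets are larger and aggregated differently):

    # Rank correlation between model-predicted AI "exposure" and observed
    # assistant usage, by occupation. All figures are hypothetical placeholders,
    # not the OpenAI or Anthropic data; they only illustrate the check.
    from scipy.stats import spearmanr

    occupations = ["software_dev", "legal", "medical", "writing", "construction"]
    exposure = [0.90, 0.85, 0.80, 0.60, 0.10]    # predicted exposure scores
    usage = [0.400, 0.004, 0.020, 0.150, 0.005]  # share of observed usage

    rho, p_value = spearmanr(exposure, usage)
    # rho = 0.30 for these numbers: high exposure does not imply heavy use
    print(f"Spearman rho = {rho:.2f} (p = {p_value:.2f})")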

Researchers caution that significant labour effects may take longer to emerge, as seen with past technologies. They argue that transparent, comprehensive usage data from major AI providers will be essential to monitor real impacts over time.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Thousands affected by AI-linked data breach in New South Wales

A major data breach has affected the Northern Rivers Resilient Homes Program in New South Wales.

Authorities confirmed that personal information was exposed after a former contractor uploaded data to the AI platform ChatGPT between 12 and 15 March 2025.

The leaked file contained over 12,000 records, with details including names, addresses, contact information and health data. Up to 3,000 individuals may be impacted.

While there is no evidence yet that the information has been accessed by third parties, the NSW Reconstruction Authority (RA) and Cyber Security NSW have launched a forensic investigation.

Officials apologised for the breach and pledged to notify all affected individuals in the coming week. ID Support NSW is offering free advice and resources, while compensation will be provided for any costs linked to replacing compromised identity documents.

The RA has also strengthened its internal policies to prevent unauthorised use of AI platforms. An independent review of the incident is underway to determine how the breach occurred and why notification took several months.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI industry faces recalibration as Altman delays AGI

OpenAI CEO Sam Altman has again adjusted his timeline for achieving artificial general intelligence (AGI). After earlier forecasts for 2023 and 2025, Altman suggests 2030 as a more realistic milestone. The move reflects mounting pressure and shifting expectations in the AI sector.

OpenAI’s public projections come amid challenging financials. Despite a valuation near $500 billion, the company reportedly lost $5 billion last year on $3.7 billion in revenue. Investors remain drawn to ambitious claims of AGI, despite widespread scepticism. Predictions now span from 2026 to 2060.

Experts question whether AGI is feasible under current large language model (LLM) architectures. They point out that LLMs rely on probabilistic patterns in text, lack lived experience, and cannot develop human judgement or intuition from data alone.

Another point of critique is that text-based models cannot fully capture embodied expertise. Fields like law, medicine, or skilled trades depend on hands-on training, tacit knowledge, and real-world context, where AI remains fundamentally limited.

As investors and commentators calibrate expectations, the AI industry may face a reckoning. Altman’s shifting forecasts underscore how hype and uncertainty continue to shape the race toward perceived machine-level intelligence.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Future of work shaped by AI, flexible ecosystems and soft retirement

As technology reshapes workplaces, how we work is set for significant change in the second half of the decade. Seven key trends are expected to drive this transformation, shaped by technological shifts, evolving employee expectations, and new organisational realities.

AI will continue to play a growing role in 2026. Beyond simply automating tasks, companies will increasingly design AI-native workflows built from the ground up to automate, predict, and support decision-making.

Hybrid and remote work will consolidate into flexible ecosystems of tools, networks, and spaces that support employees wherever they are. The trend emphasises seamless experiences, global talent access, and stronger links between remote workers and company culture.

The job landscape will continue to change as AI affects hiring in clerical, administrative, and managerial roles, while sectors such as healthcare, education, and construction grow. Human skills, such as empathy, communication, and leadership, will become increasingly valuable.

Data-driven people management will replace intuition-based approaches, with AI used to find patterns and support evidence-based decisions. Employee experience will also become a key differentiator, reflecting customer-focused strategies to attract and retain talent.

An emerging ‘soft retirement’ trend will see healthier older workers reduce hours rather than stop altogether, offering businesses valuable expertise. Those who adapt early to these trends will be better positioned to thrive in the future of work.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU kicks off cybersecurity awareness campaign against phishing threats

European Cybersecurity Month (ECSM) 2025 has kicked off, with this year’s campaign centring on the growing threat of phishing attacks.

The initiative, driven by the EU Agency for Cybersecurity (ENISA) and the European Commission, seeks to raise awareness and provide practical guidance to European citizens and organisations.

Phishing remains the primary vector through which threat actors launch social engineering attacks, and this year’s ECSM materials expand the scope to cover variants such as SMS phishing (smishing), QR code phishing (quishing), voice phishing (vishing), and business email compromise (BEC).

ENISA warns that, as of early 2025, over 80 percent of observed social engineering activity involved the use of AI, with language models enabling more convincing and scalable scams.

To support the campaign, actors at all levels, from individual citizens to large organisations, are encouraged to engage in training, simulations, awareness sessions, and public outreach under the banner #ThinkB4UClick.

A cross-institutional kick-off event is also scheduled, bringing together the EU institutions, member states and civil society to align messaging and launch coordinated activities.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!