Scammers use AI to fake British boutiques

Fraudsters are using AI-generated images and back stories to pose as British family businesses, luring shoppers into buying cheap goods from Asia. Websites claiming to be long-standing local boutiques have been linked to warehouses in China and Hong Kong.

Among them is C’est La Vie, which presented itself as a Birmingham jeweller run by a couple called Eileen and Patrick. The supposed owners appeared in highly convincing AI-generated photos, while customers later discovered their purchases were shipped from China.

Victims described feeling cheated after receiving poor-quality jewellery and clothes that bore no resemblance to the advertised items. More than 500 complaints on Trustpilot accuse such companies of exploiting fabricated stories to appear authentic.

Consumer experts at Which? warn that AI tools now enable scammers to create fake brands at an unprecedented scale. The Advertising Standards Authority (ASA) has called on social media platforms to act, as many victims were targeted through Facebook ads.

AI tools reshape how Gen Z approaches buying cars

Gen Z drivers are increasingly turning to AI tools to help them decide which car to buy. A new Motor Ombudsman survey of 1,100 UK drivers finds that over one in four Gen Z drivers would rely on AI guidance when purchasing a vehicle, compared with 12% of Gen X drivers and just 6% of Baby Boomers.

Younger drivers view AI as a neutral and judgment-free resource. Nearly two-thirds say it helps them make better decisions, while over half appreciate the ability to ask unlimited questions. Many see AI as a fast and convenient way to access information during car-buying.

Three-quarters of Gen Z respondents believe AI could help them estimate price ranges, while 60% think it would improve their haggling skills. Around four in ten say it would help them assess affordability and running costs, a sentiment less common among Millennials and Gen Xers.

Confidence levels also vary across generations. About 86% of Gen Z and 87% of Millennials say they would feel more assured if they used AI before making a purchase, compared with 39% of Gen Xers and 40% of Boomers, many of whom remain indifferent to its influence.

Almost half of drivers say they would take AI-generated information at face value. Gen Z is the most trusting, while older generations remain cautious. The Motor Ombudsman urges buyers to treat AI as a complement to trusted research and retailer checks.

Beware the language of human flourishing in AI regulation

TechPolicy.Press recently published ‘Confronting Empty Humanism in AI Policy’, a thought piece by Matt Blaszczyk arguing that human-centred and humanistic language is widespread in AI policy but often not backed by meaningful legal or regulatory substance.

Blaszczyk observes that figures such as Peter Thiel contribute to a discourse that questions the very value of human existence, yet he finds equally worrying the voices that deploy humanist, democratic, and romantic rhetoric to preserve the status quo. Such narratives can be weaponised by actors seeking to reassure the public while avoiding strong regulation.

The article analyses executive orders, AI action plans, and regulatory proposals that promise human flourishing or the protection of civil liberties, but often do so within deregulatory frameworks or under merely voluntary oversight.

For example, the EU AI Act is praised yet criticised for its gaps and loopholes; many ‘human-in-the-loop’ provisions risk reducing humans to mere rubber stamps.

Blaszczyk suggests that nominal humanism is used as a rhetorical shield. Humans are formally placed at the centre of laws and frameworks (copyright, free speech, democratic values), yet real influence, rights protection, and liability often remain minimal.

He warns that without enforcement, oversight and accountability, human-centred AI policies risk becoming slogans rather than safeguards.

Google unveils CodeMender, an AI agent that repairs code vulnerabilities

Google researchers have unveiled CodeMender, an AI-powered agent designed to automatically detect and fix software vulnerabilities.

The tool aims to improve code security by generating and applying patches that address critical flaws, allowing developers to focus on building reliable software instead of manually locating and repairing weaknesses.

Built on the Gemini Deep Think models, CodeMender operates autonomously, identifying vulnerabilities, reasoning about the underlying code, and validating patches to ensure they are correct and do not introduce regressions.

Over the past six months, it has contributed 72 security fixes to open-source projects, including some with millions of lines of code.

The system combines advanced program analysis with multi-agent collaboration to strengthen its decision-making. It employs techniques such as static and dynamic analysis, fuzzing and differential testing to trace the root causes of vulnerabilities.
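
Google has not published CodeMender’s internals, but the differential-testing step it describes can be illustrated with a minimal sketch: run the original and patched versions of a function on the same fuzzed inputs and flag any behavioural divergence, which would signal a regression introduced by the patch. Everything below (the harness, the toy fuzzer, the example functions) is illustrative, not CodeMender’s actual code.

```python
import random

def differential_test(original_fn, patched_fn, gen_input, trials=1000):
    """Run both versions on the same random inputs and report divergences.

    A divergence in return value or raised exception suggests the patch
    changed observable behaviour, i.e. a potential regression.
    """
    def observe(fn, x):
        try:
            return ("ok", fn(x))
        except Exception as exc:  # treat crashes as observations too
            return ("err", type(exc).__name__)

    divergences = []
    for _ in range(trials):
        x = gen_input()
        before, after = observe(original_fn, x), observe(patched_fn, x)
        if before != after:
            divergences.append((x, before, after))
    return divergences

# Toy harness: the patched version must behave identically to the
# original on every fuzzed input for the patch to pass validation.
original = lambda s: s[:8].upper()
patched = lambda s: s.upper()[:8]  # semantically equivalent rewrite
fuzz = lambda: "".join(random.choices("abcXYZ", k=random.randint(0, 12)))
print(len(differential_test(original, patched, fuzz)))  # expect 0
```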

Each proposed fix undergoes rigorous validation before being reviewed by human developers to guarantee quality and compliance with coding standards.

According to Google, CodeMender’s dual approach (reactively patching new flaws and proactively rewriting code to eliminate entire vulnerability classes) represents a major step forward in AI-driven cybersecurity.

The company says the tool’s success demonstrates how AI can transform the maintenance and protection of modern software systems.

Deloitte’s AI blunder: A costly lesson for the consultancy business

Deloitte has agreed to refund the Australian government the final instalment of its A$440,000 contract after acknowledging major errors in a consultancy report on welfare mutual obligations. The errors stemmed from the use of AI tools, which produced fabricated content, including a false quote attributed to a Federal Court judgment in the Robodebt case and fictitious academic references.

The incident underscores the challenges of deploying AI in critical government consultancy work without sufficient human oversight, and it raises questions about the credibility of policy decisions informed by such flawed reports.

In response to these errors, Deloitte has publicly accepted full responsibility and committed to refunding the government. The firm is re-evaluating its internal quality assurance procedures and has emphasised the necessity of rigorous human review to maintain the integrity of consultancy projects that utilise AI.

The episode has prompted the Australian government to reassess its reliance on AI-generated content for policy analysis, and oversight mechanisms are being reviewed to prevent a recurrence. The report’s inaccuracies had already influenced discussions on welfare compliance, shaking public trust in the consultancy services used for critical policymaking.

The broader consultancy industry is feeling the ripple effects, as this incident highlights the reputational and financial dangers of unchecked AI outputs. As AI becomes more prevalent for its efficiency, this case serves as a stark reminder of its limitations, particularly in sensitive government matters.

Industry pressure is growing for firms to enhance their quality control measures, disclose the level of AI involvement in their reports, and ensure that technology use does not compromise information quality. The Deloitte case adds to ongoing discussions about the ethical and practical integration of AI into professional services, reinforcing the imperative for human oversight and editorial controls even as AI technology progresses.

AI transforms retinal scans into predictive health tools

AI and machine learning are transforming ophthalmology, turning retinal scans into powerful tools for predicting health risks. Retinal images offer a non-invasive view of blood vessels and nerve fibres, revealing risk markers for high blood pressure and for kidney, heart, and stroke-related conditions.

With lifestyle-related illnesses on the rise, early detection through eye scans has become increasingly important.

Technologies like fundus photography and optical coherence tomography-angiography (OCT-A) now enable detailed imaging of retinal vessels. Researchers use AI to analyse these images, identifying microvascular biomarkers linked to systemic diseases.

Novel approaches such as ‘oculomics’ allow clinicians to predict surgical outcomes for macular hole treatment and assess patients’ risk levels for multiple conditions in one scan.

AI is also applied to diabetes screening, particularly in countries with significant at-risk populations. Deep learning frameworks can estimate average blood sugar levels (HbA1c) from retinal images, offering a non-invasive, cost-effective alternative to blood tests.
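
The article does not name a specific framework, but the approach it describes is standard image regression: a convolutional network trained to map a fundus photograph to a continuous HbA1c value. Below is a minimal PyTorch sketch in which the architecture, input size, and target values are all illustrative assumptions, not taken from any published model.

```python
import torch
import torch.nn as nn

class HbA1cRegressor(nn.Module):
    """Illustrative CNN mapping a fundus image to a single HbA1c estimate."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # pool to one feature vector per image
        )
        self.head = nn.Linear(32, 1)  # regression head: one continuous output

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = HbA1cRegressor()
images = torch.randn(4, 3, 224, 224)                  # batch of RGB fundus photos
targets = torch.tensor([[5.4], [6.1], [7.8], [5.9]])  # hypothetical HbA1c values (%)
loss = nn.functional.mse_loss(model(images), targets) # typical regression loss
loss.backward()
```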

Despite its promise, AI in ophthalmology faces challenges. Limited and non-diverse datasets can reduce accuracy, and the ‘black box’ nature of AI decision-making can make doctors hesitant to rely on it.

Collaborative efforts to share anonymised patient data and develop more transparent AI models are helping to overcome these hurdles, paving the way for safer and more reliable applications.

Anthropic’s Claude to power Deloitte’s new enterprise AI expansion

Deloitte entered a new enterprise AI partnership with Anthropic shortly after refunding the Australian government for a report that included inaccurate AI-generated information.

The A$439,000 (US$290,618) contract covered an independent review whose report contained fabricated citations to non-existent academic sources. Deloitte has since repaid the final instalment, and the Australian government has released a corrected version of the report.

Despite the controversy, Deloitte is expanding its use of AI by integrating Anthropic’s Claude chatbot across its global workforce of nearly half a million employees.

The collaboration will focus on developing AI-driven tools for compliance, automation and data analysis, especially in highly regulated industries such as finance and healthcare.

The companies also plan to design AI agent personas tailored to Deloitte’s various departments to enhance productivity and decision-making. Financial terms of the agreement were not disclosed.

India’s competition watchdog urges AI self-audits to prevent market distortions

The Competition Commission of India (CCI) has urged companies to self-audit their AI systems to prevent anti-competitive practices and ensure that increasingly autonomous systems are used responsibly.

The call came as part of the CCI’s market study on AI, which emphasised the risks of opacity and algorithmic collusion while highlighting AI’s potential to enhance innovation and productivity.

The study warned that dominant firms could exploit their control over data, infrastructure, and proprietary models to reinforce market power, creating barriers to entry. It also noted that opaque AI systems in user sectors may lead to tacit algorithmic coordination in pricing and strategy, undermining fair competition.

India’s regulatory approach, the CCI said, aims to balance technological progress with accountability through a co-regulatory framework that promotes both competition and innovation.

Additionally, the Commission plans to strengthen its technical capacity, establish a digital markets think tank and host a conference on AI and regulatory challenges.

The report recommended a six-step self-audit framework for enterprises, requiring that AI systems be evaluated against competition risks, with senior management oversight and clear accountability for high-risk deployments.

It also highlighted AI’s pro-competitive effects, particularly for MSMEs, which benefit from improved efficiency and greater access to digital markets.

Italy passes Europe’s first national AI law

Italy has become the first EU country to pass a national AI law, introducing detailed rules to govern the development and use of AI technologies across key sectors such as health, work, and justice.

The law, approved by the Senate on 17 September and in force since 10 October, designates the national authorities responsible for oversight: the Agency for Digital Italy and the National Cybersecurity Agency. Both bodies will supervise compliance, security, and the responsible use of AI systems.

In healthcare, the law simplifies data sharing for scientific research by allowing the secondary use of anonymised or pseudonymised patient data. New rules also require transparency and parental consent when AI services are used by minors under 14.

The law introduces criminal penalties for those who use AI-generated images or videos to cause harm or deception. The Italian approach combines regulation with innovation, seeking to protect citizens while promoting responsible growth in AI development.

AI-designed proteins surpass nature in genome editing

Researchers in Barcelona have developed synthetic proteins using generative AI that outperform natural ones at editing the human genome. The breakthrough, published in Nature Biotechnology, could transform treatments for cancer and rare genetic diseases.

The team from Integra Therapeutics, UPF and the CRG screened over 31,000 eukaryotic genomes, identifying more than 13,000 previously unknown PiggyBac transposase sequences. Experimental tests revealed ten active variants, two of which matched or exceeded current lab-optimised versions.

In the next phase, scientists trained a protein large language model on the newly discovered sequences to create entirely new proteins with improved genome-editing precision. The AI-generated enzymes worked efficiently in human T cells and proved compatible with Integra’s FiCAT gene-editing platform.
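
The study’s actual model is a protein large language model, but the core idea (treat amino-acid sequences as text, learn the statistics of known active sequences, then sample novel ones autoregressively) can be shown with a deliberately tiny stand-in. The Markov ‘model’ and the training fragments below are toy assumptions, not the Nature Biotechnology method.

```python
import random
from collections import defaultdict

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def train_markov(sequences):
    """Count residue-to-residue transitions; a toy stand-in for a protein LLM."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def sample(counts, length=50, seed_residue="M"):
    """Generate a novel sequence one residue at a time, like autoregressive decoding."""
    seq = [seed_residue]
    for _ in range(length - 1):
        nxt = counts.get(seq[-1])
        if not nxt:  # unseen context: fall back to a uniform choice
            seq.append(random.choice(AMINO_ACIDS))
            continue
        residues, weights = zip(*nxt.items())
        seq.append(random.choices(residues, weights=weights)[0])
    return "".join(seq)

# Hypothetical training set: fragments standing in for known transposases.
known = ["MKLVVTGGSA", "MKIVATGGTT", "MRLVVSGGSA"]
model = train_markov(known)
print(sample(model, length=30))  # a novel sequence in the learned 'grammar'
```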

The Spanish researchers say the approach shows AI can expand biology’s own toolkit. By understanding the molecular ‘grammar’ of proteins, the model produced novel sequences that remain structurally and functionally sound.
