AI search tools challenge Google’s dominance

AI tools are increasingly reshaping how people search online, with large language models like ChatGPT drawing millions away from traditional engines.

Montreal-based lawyer and consultant Anja-Sara Lahady says she now turns to ChatGPT instead of Google for everyday tasks such as meal ideas, interior decoration tips and drafting low-risk emails. She describes it as a second assistant rather than a replacement for legal reasoning.

ChatGPT’s weekly user base has surged to around 800 million, roughly double the figure reported earlier in 2025. Data shows that nearly 6% of desktop searches are already directed to language models, compared with barely half that rate a year ago.

Academics such as Professor Feng Li argue that users favour AI tools because they reduce cognitive effort by providing clear summaries instead of multiple links. However, he warns that verification remains essential due to factual errors.

Google insists its search activity continues to expand, supported by AI Overviews and AI Mode, which offer more conversational and tailored answers.

Yet testimony in a US antitrust case revealed that Google searches on Apple devices via Safari declined for the first time in two decades, underlining the competitive pressure from AI.

The rise of language models is also forcing a shift in digital marketing. Agencies report that LLMs highlight trusted websites, press releases and established media rather than social media content.

This change may influence consumer habits, with evidence suggesting that referrals from AI systems often lead to higher-quality sales conversions. For many users, AI now represents a faster and more personal route to decisions on products, travel or professional tasks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Millions of customer records stolen in Kering luxury brand data breach

Kering has confirmed a data breach affecting several of its luxury brands, including Gucci, Balenciaga, Brioni, and Alexander McQueen, after unauthorised access to its Salesforce systems compromised millions of customer records.

Hacking group ShinyHunters has claimed responsibility, alleging it exfiltrated 43.5 million records from Gucci and nearly 13 million from the other brands. The stolen data includes names, email addresses, dates of birth, sales histories, and home addresses.

Kering stated that the incident occurred in June 2025 and did not compromise bank or credit card details or national identifiers. The company has reported the breach to the relevant regulators and is notifying the affected customers.

Evidence shared by ShinyHunters suggests Balenciaga made an initial ransom payment of €500,000 before negotiations broke down. The group released sample data and chat logs to support its claims.

ShinyHunters has exploited Salesforce weaknesses in previous attacks targeting luxury, travel, and financial firms. Questions remain about the total number of affected customers and the potential exposure of other Kering brands.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Codex gets smarter with GPT-5, targets GitHub Copilot and rivals

OpenAI has optimised its new GPT-5 model for Codex, its agentic software development tool, boosting performance on both quick coding sessions and long, complex projects. CEO Sam Altman said Codex already accounts for 40% of platform traffic.

GPT-5 Codex can now build full projects, add features, run tests, refactor large codebases, and conduct detailed code reviews. It dynamically adjusts the time spent ‘thinking’ based on task complexity, allowing both interactive pair programming and extended autonomous work.

OpenAI stated that the model can run independently for over seven hours, completing refactorings, fixing test failures, and delivering finished code. Early tests indicate that it catches critical bugs more reliably, allowing developers to focus on the most important issues.

The upgraded Codex is available via terminal, IDE integrations, the web, and GitHub, and comes bundled with ChatGPT Plus, Pro, Business, Edu, and Enterprise subscriptions. OpenAI launched Codex CLI in April and a research preview in May.
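
For developers who want to script against the model directly rather than use the bundled interfaces, a minimal sketch of a programmatic call is shown below. It assumes GPT-5 Codex is exposed through OpenAI’s Responses API under an identifier such as ‘gpt-5-codex’ and accepts a reasoning-effort hint; the model name and parameters here are illustrative assumptions, not a confirmed interface.

```python
# Hypothetical sketch: calling a Codex-optimised GPT-5 model through the
# OpenAI Python SDK's Responses API. The model identifier and the
# reasoning-effort hint are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-5-codex",              # assumed identifier
    reasoning={"effort": "medium"},   # hint for how long the model 'thinks'
    input="Refactor the payment module to remove duplicated validation logic.",
)

print(response.output_text)
```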

With GPT-5 Codex, OpenAI aims to capture market share from GitHub Copilot, Google’s Gemini, Anthropic’s Claude, and startups such as Anysphere and Windsurf. The company claims the new version delivers faster, higher-quality results for developers at every stage of the software lifecycle.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

European regulators push for stronger oversight in crypto sector

European regulators from Italy, France, and Austria have called for changes to the EU’s Markets in Crypto-Assets Regulation (MiCA). Their proposals aim to fix supervisory gaps, improve cybersecurity, and simplify token white paper approvals.

The regulation, which came into force in December 2024, requires prior authorisation for firms offering crypto-related services in Europe. However, early enforcement has shown significant gaps in how national authorities apply the rules.

Regulators argue these differences undermine investor protection and threaten the stability of the European internal market.

Concerns have also been raised about non-EU platforms serving European clients through intermediaries outside MiCA’s scope. To counter this, authorities recommend restricting such activity and ensuring intermediaries only use platforms compliant with MiCA or equivalent standards.

Additional measures include mandatory independent cybersecurity audits, both before and after authorisation, to bolster resilience against cyber-attacks.

The proposals suggest giving ESMA direct oversight of major crypto providers and centralising white paper filings. Regulators say the changes would boost legal clarity, cut investor risks, and level the field for European firms against global rivals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK to benefit from Google’s £5 billion AI plan

Google has unveiled plans to invest £5 billion (around $6.8 billion) in the UK’s AI economy over the next two years.

The announcement comes just hours before US President Donald Trump’s official visit to the country, during which economic agreements worth more than $10 billion are expected.

The investment will include establishing a new AI data centre in Waltham Cross, Hertfordshire, designed to meet growing demand for services like Google Cloud.

Alongside the facility, funds will be channelled into research and development, capital expenditure, engineering, and DeepMind’s work applying AI to science and healthcare. The project is expected to support 8,250 jobs a year at British companies.

Google also revealed a partnership with Shell to support grid stability and contribute to the UK’s energy transition. The move highlights the economic and environmental stakes tied to AI expansion, as the UK positions itself as a hub for advanced digital technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google lays off over 200 AI contractors amid union tensions

US tech giant Google has dismissed more than 200 contractors working on its Gemini chatbot and AI Overviews tool, drawing criticism from labour advocates and claims of retaliation against workers pushing for unionisation.

Many of the affected staff were highly trained ‘super raters’ who helped refine Google’s AI systems before being abruptly let go.

The move highlights growing concerns over job insecurity in the AI sector, where companies depend heavily on outsourced and low-paid contract workers instead of permanent employees.

Workers allege they were penalised for raising issues about inadequate pay, poor working conditions, and the risks of training AI that could eventually replace them.

Google has attempted to distance itself from the controversy, arguing that subcontractor GlobalLogic handled the layoffs rather than the company itself.

Yet critics say that outsourcing allows the tech giant to expand its AI operations without accountability, while undermining collective bargaining efforts.

Labour experts warn that the cuts reflect a broader industry trend in which AI development rests on precarious work arrangements. With union-busting claims intensifying, the dismissals are now seen as part of a deeper struggle over workers’ rights in the digital economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

PDGrapher AI tool aims to speed up precision medicine development

Harvard Medical School researchers have developed an AI tool that could transform drug discovery by identifying multiple drivers of disease and suggesting treatments to restore cells to a healthy state.

The model, called PDGrapher, utilises graph neural networks to map the relationships between genes, proteins, and cellular pathways, thereby predicting the most effective targets for reversing disease. Unlike traditional approaches that focus on a single protein, it considers multiple factors at once.
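
Conceptually, this is a message-passing computation over a gene-interaction graph: each gene’s representation is mixed with those of its neighbours, and the result is scored to rank candidate intervention targets. The toy NumPy sketch below illustrates a single graph-convolution step of that kind on a random placeholder graph; it is not PDGrapher’s actual code, and the weights and features are arbitrary.

```python
# Toy illustration of the idea behind graph neural networks for target
# ranking: propagate features over a gene-interaction graph, then score
# each gene as a candidate intervention target. Not PDGrapher's actual code.
import numpy as np

rng = np.random.default_rng(0)

n_genes, n_features = 6, 4
A = rng.integers(0, 2, size=(n_genes, n_genes))    # random interaction graph
A = np.triu(A, 1); A = A + A.T + np.eye(n_genes)   # symmetric, with self-loops
X = rng.normal(size=(n_genes, n_features))         # per-gene expression features

# One graph-convolution step: normalise the adjacency, mix neighbour features.
deg = A.sum(axis=1)
A_norm = A / np.sqrt(np.outer(deg, deg))
W = rng.normal(size=(n_features, 8))
H = np.maximum(A_norm @ X @ W, 0.0)                # ReLU(A_hat X W)

# Score each gene as a candidate target and rank them.
w_out = rng.normal(size=(8,))
scores = H @ w_out
ranking = np.argsort(-scores)
print("candidate targets, best first:", ranking)
```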

Trained on datasets of diseased cells before and after treatment, PDGrapher correctly predicted known drug targets and identified new candidates supported by emerging research. It ranked the correct targets up to 35% higher in its predictions and ran 25 times faster than comparable tools.

Researchers are now applying PDGrapher to complex diseases such as Parkinson’s, Alzheimer’s, and various cancers, where single-target therapies often fail. By identifying combinations of targets, the tool can help overcome drug resistance and expedite treatment design.

Senior author Marinka Zitnik said the ultimate goal is to create a cellular ‘roadmap’ to guide therapy development and enable personalised treatments for patients. After further validation, PDGrapher could become a cornerstone in precision medicine.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI challenges how students prepare for exams

Australia’s Year 12 students are the first to complete their final school years with widespread access to AI tools such as ChatGPT.

Educators warn that while the technology can support study, it risks undermining the core skills of independent thinking and writing. In English, the only compulsory subject, critical thinking is now viewed as more essential than ever.

Trials in New South Wales and South Australia use AI programs designed to guide rather than provide answers, but teachers remain concerned about how to verify work and ensure students value their own voices.

Experts argue that exams, such as the VCE English paper in October, highlight the reality that AI cannot sit assessments. Students must still practise planning, drafting and reflecting on ideas, skills which remain central to academic success.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Lumex chips bring advanced AI to mobile devices

Arm Holdings has unveiled Lumex, its next-generation chip designs built to bring advanced AI performance directly to mobile devices.

The new designs range from highly energy-efficient chips for wearables to high-performance versions capable of running large AI models on smartphones without cloud support.

Lumex forms part of Arm’s Compute Subsystems business, offering handset makers pre-integrated designs, while also strengthening Arm’s broader strategy to expand smartphone and data centre revenues.

The chips are tailored for 3-nanometre manufacturing processes provided by suppliers such as TSMC, whose technology is also used in Apple’s latest iPhone chips. Arm has indicated further investment in its own chip development to capitalise on demand.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Privacy-preserving AI gets a boost with Google’s VaultGemma model

Google has unveiled VaultGemma, a new large language model built to offer state-of-the-art privacy protections through differential privacy. The 1-billion-parameter model is based on Google’s Gemma architecture and is described as the most powerful differentially private LLM to date.

Differential privacy adds carefully calibrated mathematical noise to computations over data, preventing the model from revealing information about any individual while still producing accurate overall results. The method has long been used in regulated industries, but has been difficult to apply to large language models without compromising performance.
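
For a language model, the noise is typically injected during training in the style of DP-SGD: each example’s gradient is clipped to a fixed norm and Gaussian noise calibrated to that bound is added before the parameter update. The NumPy sketch below shows that single step in isolation; the clipping norm and noise multiplier are arbitrary illustrative values, and this is not Google’s VaultGemma training code.

```python
# Minimal illustration of a DP-SGD-style update: clip each example's gradient,
# sum, add Gaussian noise scaled to the clipping bound, then average.
# Values are illustrative; this is not VaultGemma's actual training code.
import numpy as np

rng = np.random.default_rng(42)

per_example_grads = rng.normal(size=(32, 10))   # 32 examples, 10 parameters
clip_norm = 1.0                                 # C: per-example sensitivity bound
noise_multiplier = 1.1                          # sigma: set by the privacy budget

# Clip each example's gradient to L2 norm <= C.
norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
clipped = per_example_grads * np.minimum(1.0, clip_norm / norms)

# Sum, add noise with standard deviation sigma * C, and average over the batch.
noisy_sum = clipped.sum(axis=0) + rng.normal(scale=noise_multiplier * clip_norm, size=10)
private_grad = noisy_sum / per_example_grads.shape[0]

print(private_grad)
```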

VaultGemma is designed to eliminate that trade-off. Google states that the model can be trained and deployed with differential privacy enabled, while maintaining comparable stability and efficiency to non-private LLMs.

This breakthrough could have significant implications for developers building privacy-sensitive AI systems in sectors ranging from healthcare and finance to government services. It demonstrates that sensitive data can be protected without sacrificing speed or accuracy.

Google’s research teams say the model will be released with open-source tools to help others adopt privacy-preserving techniques. The move comes amid rising regulatory and public scrutiny over how AI systems handle personal data.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!